A Data Cartography based MixUp for Pre-trained Language Models

Seo Yeon Park, Cornelia Caragea


Abstract
MixUp is a data augmentation strategy in which additional samples are generated during training by combining random pairs of training samples and their labels. However, selecting pairs at random may not be an optimal choice. In this work, we propose TDMixUp, a novel MixUp strategy that leverages Training Dynamics and allows more informative samples to be combined for generating new data samples. Our proposed TDMixUp first measures confidence and variability (Swayamdipta et al., 2020) and Area Under the Margin (AUM) (Pleiss et al., 2020) to characterize training samples (e.g., as easy-to-learn or ambiguous), and then interpolates these characterized samples. We empirically validate that our method not only achieves competitive performance using a smaller subset of the training data compared with strong baselines, but also yields a lower expected calibration error for the pre-trained language model BERT, in both in-domain and out-of-domain settings across a wide range of NLP tasks. We publicly release our code.
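The pipeline the abstract describes can be sketched as follows. This is a minimal NumPy sketch, not the authors' released implementation (see the code link below): confidence and variability are the mean and standard deviation of the gold-label probability across training epochs (Swayamdipta et al., 2020), AUM is the average logit margin (Pleiss et al., 2020), and selected pairs are interpolated as in vanilla MixUp. The selection fraction, the Beta parameter alpha, and the exact pairing rule are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def training_dynamics(gold_probs):
    """Data-map statistics (Swayamdipta et al., 2020).
    gold_probs: (num_epochs, num_examples) array holding the model's
    probability for each example's gold label at each training epoch."""
    confidence = gold_probs.mean(axis=0)   # mean gold-label probability
    variability = gold_probs.std(axis=0)   # spread across epochs
    return confidence, variability

def area_under_margin(gold_logits, max_other_logits):
    """AUM (Pleiss et al., 2020): average margin between the gold-label
    logit and the largest non-gold logit, across epochs."""
    return (gold_logits - max_other_logits).mean(axis=0)

def select_pairs(confidence, variability, frac=0.33, rng=None):
    """Pair easy-to-learn samples (high confidence, low variability) with
    ambiguous samples (high variability). frac is an assumed fraction."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = int(frac * len(confidence))
    easy = np.argsort(variability)[:k]           # low variability first,
    easy = easy[np.argsort(-confidence[easy])]   # then high confidence
    ambiguous = np.argsort(-variability)[:k]     # highest variability
    return rng.permutation(easy), rng.permutation(ambiguous)

def mixup(x_a, x_b, y_a, y_b, alpha=0.4, rng=None):
    """Vanilla MixUp interpolation (Zhang et al., 2018) of features and
    one-hot labels with lambda ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
```

Note that for a pre-trained language model such as BERT, the interpolation is typically applied to continuous embeddings or hidden representations rather than to discrete token ids.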
Anthology ID:
2022.naacl-main.314
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4244–4250
URL:
https://aclanthology.org/2022.naacl-main.314
DOI:
10.18653/v1/2022.naacl-main.314
Bibkey:
Cite (ACL):
Seo Yeon Park and Cornelia Caragea. 2022. A Data Cartography based MixUp for Pre-trained Language Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4244–4250, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
A Data Cartography based MixUp for Pre-trained Language Models (Park & Caragea, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.314.pdf
Software:
 2022.naacl-main.314.software.zip
Video:
 https://aclanthology.org/2022.naacl-main.314.mp4
Code:
 seoyeon-p/tdmixup
Data:
 MultiNLI, SNLI, SWAG