Debias NLU Datasets via Training-free Perturbations

Qi Guo, Yuanhang Tang, Yawen Ouyang, Zhen Wu, Xinyu Dai


Abstract
Several recent studies have shown that advanced models for natural language understanding (NLU) are prone to capturing biased features that are independent of the task but spuriously correlated with labels. Such models often perform well on in-distribution (ID) datasets but fail to generalize to out-of-distribution (OOD) datasets. Existing solutions fall into two orthogonal approaches: model-centric methods and data-centric methods. Model-centric methods improve OOD performance at the expense of ID performance. Data-centric methods usually boost both via data-level manipulations such as generative data augmentation; however, the high cost of fine-tuning a generator to produce valid samples limits their potential. To address this issue, we propose PDD, a framework that conducts training-free Perturbations on samples containing biased features to Debias NLU Datasets. PDD works by iteratively perturbing samples with pre-trained masked language models (MLMs). It keeps costs low by adopting a training-free perturbation strategy and further improves label consistency by utilizing label information during perturbation. Extensive experiments demonstrate that PDD performs competitively with previous state-of-the-art debiasing strategies. When combined with model-centric debiasing methods, PDD establishes a new state of the art.
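To make the core idea concrete, below is a minimal, hedged sketch of training-free perturbation with a pre-trained masked language model: tokens of a (possibly biased) example are masked one at a time and rewritten from the MLM's predictions, with no fine-tuning. The model name (bert-base-uncased), the perturb helper, the positions argument, and the candidate-selection rule are illustrative assumptions, not the exact PDD procedure; in particular, PDD additionally uses label information to keep perturbed samples consistent with their labels, which this sketch omits.

```python
# Illustrative sketch only: rewrite selected tokens of a sample with a
# pre-trained masked language model, without any generator fine-tuning.
# Model choice, masking schedule, and candidate selection are assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT


def perturb(sentence: str, positions: list[int], top_k: int = 5) -> str:
    """Iteratively replace the tokens at `positions` using MLM predictions."""
    tokens = sentence.split()
    for pos in positions:
        original = tokens[pos]
        tokens[pos] = MASK
        candidates = fill_mask(" ".join(tokens), top_k=top_k)
        # Take the highest-scoring prediction that differs from the original
        # token; fall back to the original if every candidate matches it.
        replacement = next(
            (c["token_str"].strip() for c in candidates
             if c["token_str"].strip() != original),
            original,
        )
        tokens[pos] = replacement
    return " ".join(tokens)


# Example: perturb one word of a hypothesis-like sentence.
print(perturb("The actor was not happy with the movie .", positions=[5]))
```

In a full pipeline, such perturbations would target samples identified as containing biased features (e.g., via a bias-only model), and a label-aware filter would discard rewrites that flip the label.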
Anthology ID:
2023.findings-emnlp.726
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10886–10901
URL:
https://aclanthology.org/2023.findings-emnlp.726
DOI:
10.18653/v1/2023.findings-emnlp.726
Cite (ACL):
Qi Guo, Yuanhang Tang, Yawen Ouyang, Zhen Wu, and Xinyu Dai. 2023. Debias NLU Datasets via Training-free Perturbations. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10886–10901, Singapore. Association for Computational Linguistics.
Cite (Informal):
Debias NLU Datasets via Training-free Perturbations (Guo et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.726.pdf