FairFlow: Mitigating Dataset Biases through Undecided Learning for Natural Language Understanding

Jiali Cheng, Hadi Amiri


Abstract
Language models are prone to dataset biases, known as shortcuts or spurious correlations in data, which often result in performance drops on new data. We present a new debiasing framework called FairFlow that mitigates dataset biases by learning to be undecided in its predictions for data samples or representations associated with known or unknown biases. The framework introduces two key components: a suite of data and model perturbation operations that generate different biased views of input samples, and a contrastive objective that learns debiased and robust representations from these biased views. Experiments show that FairFlow outperforms existing debiasing methods, particularly on out-of-domain and hard test samples, without compromising in-domain performance.
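The abstract describes two components: perturbation operations that produce biased views of each input, and a contrastive objective over those views that also pushes the model to be "undecided" on bias-only signals. The paper's actual formulation is not reproduced here; the following is a minimal PyTorch sketch of that general idea under stated assumptions. All names (`undecided_loss`, `contrastive_debias_loss`) and the specific loss forms (KL to uniform, InfoNCE with biased views as negatives) are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def undecided_loss(logits_biased: torch.Tensor) -> torch.Tensor:
    """Push predictions on biased views toward the uniform distribution,
    i.e., make the model 'undecided' when only bias features are present.
    (Hypothetical loss form; the paper may define this differently.)"""
    log_probs = F.log_softmax(logits_biased, dim=-1)
    num_classes = logits_biased.size(-1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    # KL divergence to the uniform target, averaged over the batch.
    return F.kl_div(log_probs, uniform, reduction="batchmean")


def contrastive_debias_loss(
    z_clean: torch.Tensor,      # (B, D) representations of original inputs
    z_clean_aug: torch.Tensor,  # (B, D) second clean view (positive pairs)
    z_biased: torch.Tensor,     # (B, D) representations of biased views (negatives)
    temperature: float = 0.1,
) -> torch.Tensor:
    """InfoNCE-style objective: clean views of a sample attract each other
    while biased views repel, so the encoder sheds bias-aligned features."""
    z_clean = F.normalize(z_clean, dim=-1)
    z_clean_aug = F.normalize(z_clean_aug, dim=-1)
    z_biased = F.normalize(z_biased, dim=-1)
    pos = (z_clean * z_clean_aug).sum(dim=-1, keepdim=True) / temperature  # (B, 1)
    neg = z_clean @ z_biased.t() / temperature                             # (B, B)
    logits = torch.cat([pos, neg], dim=1)
    # Index 0 (the clean positive) is the target class for every anchor.
    targets = torch.zeros(z_clean.size(0), dtype=torch.long, device=z_clean.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy check with random tensors standing in for encoder outputs.
    B, D, C = 8, 32, 3
    total = undecided_loss(torch.randn(B, C)) + contrastive_debias_loss(
        torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
    )
    print(float(total))
```

In practice these terms would be weighted and combined with the standard task loss; see the paper itself (PDF link below) for the actual objectives and perturbation operations.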
Anthology ID:
2024.emnlp-main.1225
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21960–21975
URL:
https://aclanthology.org/2024.emnlp-main.1225/
DOI:
10.18653/v1/2024.emnlp-main.1225
Cite (ACL):
Jiali Cheng and Hadi Amiri. 2024. FairFlow: Mitigating Dataset Biases through Undecided Learning for Natural Language Understanding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21960–21975, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
FairFlow: Mitigating Dataset Biases through Undecided Learning for Natural Language Understanding (Cheng & Amiri, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1225.pdf