Learning from Others: Similarity-based Regularization for Mitigating Dataset Bias.

Reda Igbaria, Yonatan Belinkov


Abstract
Common methods for mitigating spurious correlations in natural language understanding (NLU) usually operate in the output space, encouraging a main model to behave differently from a bias model by down-weighting examples on which the bias model is confident. While such methods improve out-of-distribution (OOD) performance, it was recently observed that the internal representations of the presumably debiased models are actually more, rather than less, biased. We propose SimReg, a new method for debiasing internal model components via similarity-based regularization in representation space: we encourage the model to learn representations that are either similar to those of an unbiased model or different from those of a biased model. We experiment with three NLU tasks and different kinds of biases. We find that SimReg improves OOD performance, with little in-distribution degradation. Moreover, the representations learned by SimReg are less biased than those of other methods.
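As a rough illustration of the idea described in the abstract, similarity-based regularization in representation space can be sketched as a penalty added to the task loss: pull the main model's hidden representation toward that of an unbiased reference model, or push it away from that of a biased one. This is a minimal sketch, not the paper's actual objective; the function names, the choice of cosine similarity, and the weighting coefficient `lam` are all assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two representation vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def simreg_loss(task_loss, h_main, h_ref, mode="attract", lam=0.1):
    """Hypothetical similarity-based regularizer (illustrative only).

    mode="attract": encourage h_main to align with an unbiased
    reference representation h_ref (penalty shrinks as they align).
    mode="repel": encourage h_main to differ from a biased
    reference representation h_ref (penalty shrinks as they diverge).
    """
    sim = cosine_sim(h_main, h_ref)
    if mode == "attract":
        reg = 1.0 - sim   # zero when representations are identical
    else:
        reg = sim         # small when representations are dissimilar
    return task_loss + lam * reg
```

In this toy form, identical representations incur no penalty under `mode="attract"` and the maximum penalty under `mode="repel"`; in practice such a regularizer would be applied per batch to hidden states of selected model components during fine-tuning.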
Anthology ID:
2024.repl4nlp-1.4
Volume:
Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Chen Zhao, Marius Mosbach, Pepa Atanasova, Seraphina Goldfarb-Tarrent, Peter Hase, Arian Hosseini, Maha Elbayad, Sandro Pezzelle, Maximilian Mozes
Venues:
RepL4NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
37–50
URL:
https://aclanthology.org/2024.repl4nlp-1.4
Cite (ACL):
Reda Igbaria and Yonatan Belinkov. 2024. Learning from Others: Similarity-based Regularization for Mitigating Dataset Bias.. In Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024), pages 37–50, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Learning from Others: Similarity-based Regularization for Mitigating Dataset Bias. (Igbaria & Belinkov, RepL4NLP-WS 2024)
PDF:
https://aclanthology.org/2024.repl4nlp-1.4.pdf