Don’t Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques

Hossein Amirkhani, Mohammad Taher Pilehvar


Abstract
Existing techniques for mitigating dataset bias often leverage a biased model to identify biased instances. The role of these biased instances is then reduced during the training of the main model to enhance its robustness to out-of-distribution data. A core assumption common to these techniques is that the main model handles biased instances similarly to the biased model, in that it resorts to biases whenever they are available. In this paper, we show that this assumption does not hold in general. We carry out a critical investigation of two well-known datasets in the domain, MNLI and FEVER, along with two biased-instance detection methods, partial-input and limited-capacity models. Our experiments show that for around a third to a half of the instances, the biased model is unable to predict the main model’s behavior, as evidenced by the significantly different parts of the input on which the two models base their decisions. Based on a manual validation, we also show that this estimate is highly in line with human interpretation. Our findings suggest that the widely practiced procedure of down-weighting the instances detected by bias detection methods needlessly wastes training data. We release our code to facilitate reproducibility and future research.
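For context, the down-weighting recipe the paper questions typically trains a biased model first (e.g., a partial-input or limited-capacity model) and then scales down the main model's loss on examples the biased model solves confidently. The sketch below, assuming PyTorch, illustrates one common variant of that step (loss reweighting by the biased model's confidence on the gold label); the function and variable names are illustrative, and this is not the authors' released implementation.

# Minimal sketch of example reweighting (an assumption of the common recipe,
# not the paper's exact method): down-weight examples on which a pre-trained
# biased model is confident about the gold label.
import torch
import torch.nn.functional as F

def reweighted_loss(main_logits: torch.Tensor,
                    biased_probs: torch.Tensor,
                    labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy where each example's weight is 1 - p_biased(gold)."""
    # Per-example loss of the main model, no reduction yet.
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    # Probability the (frozen) biased model assigns to the gold label.
    p_gold = biased_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Confidently "biased" examples get weights near zero, i.e., are
    # effectively discarded; this is the waste the paper argues against.
    weights = 1.0 - p_gold
    return (weights * per_example).mean()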
Anthology ID:
2021.findings-emnlp.405
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4720–4728
URL:
https://aclanthology.org/2021.findings-emnlp.405
DOI:
10.18653/v1/2021.findings-emnlp.405
Cite (ACL):
Hossein Amirkhani and Mohammad Taher Pilehvar. 2021. Don’t Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4720–4728, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Don’t Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques (Amirkhani & Pilehvar, Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.405.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.405.mp4
Code:
h-amirkhani/debiasing-assumption
Data:
FEVER, MultiNLI