Stubborn Lexical Bias in Data and Models

Sofia Serrano, Jesse Dodge, Noah A. Smith


Abstract
In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect of such correlations are typically examined feature by feature. We investigate the cumulative impact on a model of many such intersecting features. Using a new statistical method, we examine whether such spurious patterns in data appear in models trained on the data. We select two tasks, natural language inference and duplicate-question detection, for which any unigram feature on its own should ideally be uninformative, which gives us a large pool of automatically extracted features with which to experiment. The large size of this pool allows us to investigate the intersection of features spuriously associated with (potentially different) labels. We then apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations, and examine how doing so affects models trained on the reweighted data. Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models, including worsened bias for slightly more complex features (bigrams). We close with discussion of the implications of our results for what it means to "debias" training data, and how issues of data quality can affect model bias.
Anthology ID:
2023.findings-acl.516
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8131–8146
URL:
https://aclanthology.org/2023.findings-acl.516
DOI:
10.18653/v1/2023.findings-acl.516
Bibkey:
Cite (ACL):
Sofia Serrano, Jesse Dodge, and Noah A. Smith. 2023. Stubborn Lexical Bias in Data and Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8131–8146, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Stubborn Lexical Bias in Data and Models (Serrano et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.516.pdf
Video:
https://aclanthology.org/2023.findings-acl.516.mp4