Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models

Tianlu Wang, Rohit Sridhar, Diyi Yang, Xuezhi Wang


Abstract
Recently, NLP models have achieved remarkable progress across a variety of tasks; however, they have also been criticized for not being robust. Many robustness problems can be attributed to models exploiting “spurious correlations”, or “shortcuts”, between the training data and the task labels. Most existing work identifies a limited set of task-specific shortcuts via human priors or error analyses, which requires extensive expertise and effort. In this paper, we aim to automatically identify such spurious correlations in NLP models at scale. We first leverage existing interpretability methods to extract tokens from the input text that significantly affect the model’s decision process. We then distinguish “genuine” tokens from “spurious” tokens by analyzing model predictions across multiple corpora and further verify them through knowledge-aware perturbations. We show that our proposed method can effectively and efficiently identify a scalable set of “shortcuts”, and mitigating these leads to more robust models in multiple applications.
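The abstract outlines a three-step pipeline: extract influential tokens with an interpretability method, compare their behavior across corpora to separate “genuine” from “spurious” tokens, and verify candidates with knowledge-aware perturbations. The sketch below illustrates only the cross-corpus comparison step and is not the paper’s implementation: the function names, the thresholds (min_count, gap), and the simple token–label co-occurrence statistic standing in for attribution scores are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): flag tokens whose association
# with a predicted label is strong in one corpus but weak or reversed in
# another, as candidate "shortcut" tokens for further verification.
from collections import Counter, defaultdict

def token_label_association(examples):
    """examples: list of (tokens, predicted_label).
    For each token, return (majority_label, fraction_of_occurrences_with_that_label, count)."""
    counts = defaultdict(Counter)
    for tokens, label in examples:
        for tok in set(tokens):
            counts[tok][label] += 1
    assoc = {}
    for tok, label_counts in counts.items():
        total = sum(label_counts.values())
        label, freq = label_counts.most_common(1)[0]
        assoc[tok] = (label, freq / total, total)
    return assoc

def candidate_spurious_tokens(corpus_a, corpus_b, min_count=20, gap=0.3):
    """Tokens strongly predictive of a label in corpus A but weaker (or tied to
    a different label) in corpus B are returned as candidate shortcuts."""
    assoc_a = token_label_association(corpus_a)
    assoc_b = token_label_association(corpus_b)
    flagged = []
    for tok, (label, score_a, count) in assoc_a.items():
        if count < min_count or tok not in assoc_b:
            continue
        label_b, score_b, _ = assoc_b[tok]
        if label_b != label or score_a - score_b > gap:
            flagged.append((tok, label, score_a, score_b))
    # Largest cross-corpus drop first.
    return sorted(flagged, key=lambda t: t[3] - t[2])

if __name__ == "__main__":
    # Toy data: "spielberg" co-occurs with the positive label in corpus A
    # but is uninformative in corpus B, so it gets flagged.
    corpus_a = [(["great", "movie", "spielberg"], 1),
                (["great", "film", "spielberg"], 1),
                (["boring", "plot"], 0),
                (["dull", "script"], 0)] * 10
    corpus_b = [(["spielberg", "boring", "plot"], 0),
                (["spielberg", "great", "film"], 1)] * 10
    print(candidate_spurious_tokens(corpus_a, corpus_b, min_count=5))
```

In the paper itself, the candidates identified at this stage are further checked with knowledge-aware perturbations before being treated as shortcuts to mitigate.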
Anthology ID:
2022.findings-naacl.130
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1719–1729
URL:
https://aclanthology.org/2022.findings-naacl.130
DOI:
10.18653/v1/2022.findings-naacl.130
Cite (ACL):
Tianlu Wang, Rohit Sridhar, Diyi Yang, and Xuezhi Wang. 2022. Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1719–1729, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.130.pdf
Video:
https://aclanthology.org/2022.findings-naacl.130.mp4
Code:
tianlu-wang/Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models
Data:
DBpedia, SST, SST-2