Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens

Nitish Joshi, Xiang Pan, He He


Abstract
The term ‘spurious correlations’ has been used in NLP to informally denote any undesirable feature-label correlations. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature’s effect on the label depends on the context (e.g. negation words in a review), which is ubiquitous in language tasks. In case (i), we want the model to be invariant to the feature, which is neither necessary nor sufficient for prediction. But in case (ii), even an ideal model (e.g. humans) must rely on the feature, since it is necessary (but not sufficient) for prediction. Therefore, a more fine-grained treatment of spurious features is needed to specify the desired model behavior. We formalize this distinction using a causal model and probabilities of necessity and sufficiency, which delineates the causal relations between a feature and a label. We then show that this distinction helps explain results of existing debiasing methods on different spurious features, and demystifies surprising results such as the encoding of spurious features in model representations after debiasing.
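As a reference point, the probabilities of necessity and sufficiency mentioned above are Pearl's standard counterfactual quantities; a minimal sketch, assuming a binary feature X and binary label Y (notation chosen here for illustration, not fixed by the paper page):

    PN = P(Y_{X=0} = 0 | X = 1, Y = 1)   (would removing the feature flip the label?)
    PS = P(Y_{X=1} = 1 | X = 0, Y = 0)   (would adding the feature produce the label?)

In these terms, case (i) corresponds to a feature that is neither necessary nor sufficient (low PN and low PS), while case (ii) corresponds to a feature that is necessary but not sufficient (high PN, low PS).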
Anthology ID: 2022.emnlp-main.666
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 9804–9817
URL: https://aclanthology.org/2022.emnlp-main.666
DOI: 10.18653/v1/2022.emnlp-main.666
Cite (ACL): Nitish Joshi, Xiang Pan, and He He. 2022. Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9804–9817, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens (Joshi et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.666.pdf