Does Self-Rationalization Improve Robustness to Spurious Correlations?

Alexis Ross, Matthew Peters, Ana Marasović


Abstract
Rationalization is fundamental to human reasoning and learning. NLP models trained to produce rationales along with predictions, called self-rationalization models, have been investigated for their interpretability and utility to end-users. However, the extent to which training with human-written rationales facilitates learning remains an under-explored question. We ask whether training models to self-rationalize can aid in their learning to solve tasks for the right reasons. Specifically, we evaluate how training self-rationalization models with free-text rationales affects robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes. We evaluate robustness to spurious correlations by measuring performance on 1) manually annotated challenge datasets and 2) subsets of original test sets where reliance on spurious correlations would fail to produce correct answers. We find that while self-rationalization can improve robustness to spurious correlations in low-resource settings, it tends to hurt robustness in higher-resource settings. Furthermore, these effects depend on model family and size, as well as on rationale content. Together, our results suggest that explainability can come at the cost of robustness; thus, appropriate care should be taken when training self-rationalizing models with the goal of creating more trustworthy models.
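To make the training setup concrete, below is a minimal sketch of how input/target pairs might be constructed when fine-tuning a text-to-text model (e.g., T5) to self-rationalize on an NLI-style task: the target contains the label followed by a free-text rationale, rather than the label alone. The prompt template, separator strings, and function names here are illustrative assumptions for exposition, not the paper's exact configuration.

from typing import Optional, Tuple

def format_example(premise: str, hypothesis: str, label: str,
                   rationale: Optional[str] = None) -> Tuple[str, str]:
    """Build an (input, target) pair for text-to-text fine-tuning.

    With a rationale, the target holds the label followed by a free-text
    explanation (self-rationalization); without one, the target is the
    label alone (standard fine-tuning baseline). NOTE: this template is
    an assumption for illustration, not the paper's exact format.
    """
    source = f"premise: {premise} hypothesis: {hypothesis}"
    if rationale is None:
        target = label  # baseline: predict the label only
    else:
        target = f"{label} explanation: {rationale}"  # label + rationale
    return source, target

# Example usage:
src, tgt = format_example(
    premise="A dog is running through a field.",
    hypothesis="An animal is outside.",
    label="entailment",
    rationale="A dog is an animal, and a field is outdoors.",
)
print(src)  # premise: A dog is running ... hypothesis: An animal is outside.
print(tgt)  # entailment explanation: A dog is an animal, and a field is outdoors.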
Anthology ID: 2022.emnlp-main.501
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 7403–7416
URL: https://aclanthology.org/2022.emnlp-main.501
DOI: 10.18653/v1/2022.emnlp-main.501
Cite (ACL): Alexis Ross, Matthew Peters, and Ana Marasović. 2022. Does Self-Rationalization Improve Robustness to Spurious Correlations?. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7403–7416, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Does Self-Rationalization Improve Robustness to Spurious Correlations? (Ross et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.501.pdf