Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Chandu, Vivek Srikumar, Sameer Singh, Noah Smith

Abstract
The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on only that part of the input can outperform chance. Do models trained on the full input pick up on these correlations? To address this question, we propose a new evaluation method, the Counterfactual Attentiveness Test (CAT). CAT constructs counterfactuals by replacing part of the input with its counterpart from a different example (subject to some restrictions), expecting an attentive model to change its prediction. Using CAT, we systematically investigate established supervised and in-context learning models on ten datasets spanning four tasks: natural language inference, reading comprehension, paraphrase detection, and visual and language reasoning. CAT reveals that reliance on such correlations is mainly data-dependent. Surprisingly, we find that GPT-3 becomes less attentive as the number of in-context demonstrations increases, even as its accuracy on the test data improves. Our results demonstrate that augmenting training or demonstration data with counterfactuals is effective in improving models' attentiveness. Finally, we show that attentiveness as measured by CAT leads to different conclusions than measuring correlations in the data alone.
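As a concrete illustration, below is a minimal Python sketch of how a CAT-style evaluation could be run for an NLI-like task. The `model` callable and the donor restriction used here (the replacement hypothesis must come from an example with a different gold label) are illustrative assumptions based on the abstract's description, not the paper's exact protocol.

```python
import random

def cat_score(model, examples, seed=0):
    """Counterfactual Attentiveness Test, minimal sketch.

    model: callable (premise, hypothesis) -> predicted label. This is a
        stand-in interface for any classifier, not an API from the paper.
    examples: list of (premise, hypothesis, gold_label) triples.

    For each example, swap in a hypothesis taken from a different example
    and check whether the model's prediction changes; an attentive model
    is expected to flip. Returns the fraction of predictions that flip.
    """
    rng = random.Random(seed)
    flips, total = 0, 0
    for i, (premise, hypothesis, gold) in enumerate(examples):
        # Candidate replacement hypotheses. The restriction applied here
        # (donor example must carry a different gold label) is one
        # plausible reading of "subject to some restrictions".
        donors = [h for j, (_, h, g) in enumerate(examples)
                  if j != i and g != gold]
        if not donors:
            continue
        original = model(premise, hypothesis)
        counterfactual = model(premise, rng.choice(donors))
        flips += int(counterfactual != original)
        total += 1
    return flips / total if total else 0.0
```

As a sanity check on the intuition, a hypothesis-only "model" that ignores the premise entirely would still change its output whenever the swapped-in hypothesis differs, so in practice attentiveness is meaningful relative to whether the prediction changes in the way the full input warrants; this sketch only captures the basic swap-and-compare mechanism.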
Anthology ID: 2024.findings-emnlp.205
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3603–3623
URL: https://aclanthology.org/2024.findings-emnlp.205
Cite (ACL): Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Chandu, Vivek Srikumar, Sameer Singh, and Noah Smith. 2024. Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3603–3623, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals (Elazar et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.205.pdf