Counterfactual reasoning: Testing language models’ understanding of hypothetical scenarios

Jiaxuan Li, Lang Yu, Allyson Ettinger


Abstract
Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish effects of statistical correlation from more systematic logical reasoning grounded in an understanding of the real world. We tease these factors apart by leveraging counterfactual conditionals, which force language models to predict unusual consequences based on hypothetical propositions. We introduce a set of tests drawn from psycholinguistic experiments, as well as larger-scale controlled datasets, to probe counterfactual predictions from five pre-trained language models. We find that models are consistently able to override real-world knowledge in counterfactual scenarios, and that this effect is more robust when baseline world knowledge is stronger; however, for most models the effect appears to be driven largely by simple lexical cues. When we mitigate the effects of both world knowledge and lexical cues in order to test knowledge of the linguistic nuances of counterfactuals, we find that only GPT-3 shows sensitivity to these nuances, though this sensitivity is also non-trivially impacted by lexical associative factors.
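For illustration, the following is a minimal sketch of the general probing setup the abstract describes: scoring a model's preference between a real-world-consistent completion and a counterfactual-consistent completion given a counterfactual premise. This is not the authors' released code; the example prompt, model choice (GPT-2 via the HuggingFace transformers library), and summed-log-probability scoring are all simplifying assumptions.

    # Sketch: compare a causal LM's preference for a counterfactual-consistent
    # vs. a real-world-consistent completion. Illustrative assumptions only,
    # not the paper's exact stimuli or evaluation protocol.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def completion_logprob(context: str, completion: str) -> float:
        """Sum of token log-probabilities of `completion` given `context`.

        Assumes the completion starts with a space so GPT-2's BPE
        tokenization of context + completion aligns at the boundary.
        """
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        full_ids = tokenizer(context + completion, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        total = 0.0
        # Score only the completion tokens; position i-1 predicts token i.
        for i in range(ctx_ids.shape[1], full_ids.shape[1]):
            total += log_probs[0, i - 1, full_ids[0, i]].item()
        return total

    # A counterfactual premise should raise the hypothetical-consistent
    # completion above the one favored by real-world knowledge.
    context = "If cats were vegetarians, cats would love"
    print(completion_logprob(context, " carrots"))  # counterfactual-consistent
    print(completion_logprob(context, " fish"))     # real-world-consistent

A model that merely reproduces real-world co-occurrence statistics would assign the higher score to " fish"; overriding world knowledge in the counterfactual scenario corresponds to preferring " carrots" here.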
Anthology ID:
2023.acl-short.70
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
804–815
URL:
https://aclanthology.org/2023.acl-short.70
DOI:
10.18653/v1/2023.acl-short.70
Cite (ACL):
Jiaxuan Li, Lang Yu, and Allyson Ettinger. 2023. Counterfactual reasoning: Testing language models’ understanding of hypothetical scenarios. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 804–815, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Counterfactual reasoning: Testing language models’ understanding of hypothetical scenarios (Li et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-short.70.pdf
Video:
https://aclanthology.org/2023.acl-short.70.mp4