Testing Pre-trained Language Models’ Understanding of Distributivity via Causal Mediation Analysis

Pangbo Ban, Yifan Jiang, Tianran Liu, Shane Steinert-Threlkeld


Abstract
To what extent do pre-trained language models grasp semantic knowledge regarding the phenomenon of distributivity? In this paper, we introduce DistNLI, a new diagnostic dataset for natural language inference that targets the semantic difference arising from distributivity, and employ the causal mediation analysis framework to quantify model behavior and explore the underlying mechanism in this semantically related task. We find that the extent of models’ understanding is associated with model size and vocabulary size. We also provide insights into how models encode such high-level semantic knowledge.
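The causal mediation analysis framework mentioned above decomposes a model's response to an intervention into a direct effect and an indirect effect routed through a mediator (e.g. a hidden activation). As a minimal sketch of that decomposition — a toy linear model with hypothetical functions, not the paper's actual implementation — the effects can be computed by selectively intervening on the mediator:

```python
# Toy causal mediation analysis: the output depends on the input x both
# directly and through a mediator m (standing in for a hidden activation).
# All functions and values here are illustrative assumptions.

def mediator(x):
    # toy mediator value (stands in for a neuron's activation under input x)
    return 2 * x

def output(x, m):
    # toy model output: a direct path from x plus a mediated path through m
    return x + 3 * m

x0, x1 = 0.0, 1.0  # "null" input and intervened input

# Total effect: change the input and let the mediator respond naturally.
te = output(x1, mediator(x1)) - output(x0, mediator(x0))

# Indirect effect: keep the input at x0, but set the mediator to the value
# it would take under x1 (an intervention on the mediator alone).
ie = output(x0, mediator(x1)) - output(x0, mediator(x0))

# Direct effect: change the input while holding the mediator fixed at m(x0).
de = output(x1, mediator(x0)) - output(x0, mediator(x0))

print(te, ie, de)  # → 7.0 6.0 1.0
```

In this linear toy case the total effect is exactly the sum of the direct and indirect effects; in a real network the decomposition is measured empirically per component.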
Anthology ID: 2022.blackboxnlp-1.26
Volume: Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Editors: Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, Sarah Wiegreffe
Venue: BlackboxNLP
Publisher: Association for Computational Linguistics
Pages: 314–324
URL: https://aclanthology.org/2022.blackboxnlp-1.26
DOI: 10.18653/v1/2022.blackboxnlp-1.26
Cite (ACL):
Pangbo Ban, Yifan Jiang, Tianran Liu, and Shane Steinert-Threlkeld. 2022. Testing Pre-trained Language Models’ Understanding of Distributivity via Causal Mediation Analysis. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 314–324, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Testing Pre-trained Language Models’ Understanding of Distributivity via Causal Mediation Analysis (Ban et al., BlackboxNLP 2022)
PDF: https://aclanthology.org/2022.blackboxnlp-1.26.pdf