Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction

Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg


Abstract
When language models process syntactically complex sentences, do they use their representations of syntax in a manner that is consistent with the grammar of the language? We propose AlterRep, an intervention-based method to address this question. For any linguistic feature of a given sentence, AlterRep generates counterfactual representations by altering how the feature is encoded, while leaving intact all other aspects of the original representation. By measuring the change in a model’s word prediction behavior when these counterfactual representations are substituted for the original ones, we can draw conclusions about the causal effect of the linguistic feature in question on the model’s behavior. We apply this method to study how BERT models of different sizes process relative clauses (RCs). We find that BERT variants use RC boundary information during word prediction in a manner that is consistent with the rules of English grammar; this RC boundary information generalizes to a considerable extent across different RC types, suggesting that BERT represents RCs as an abstract linguistic category.
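The core idea of the intervention can be sketched in a few lines. This is a simplified, hypothetical illustration (not the paper's implementation): assume a linear probe has already been trained and its weight rows `W` span the subspace that encodes the feature; the intervention removes the representation's component in that subspace and pushes it toward the opposite side of the probe's decision boundary, scaled by a hypothetical strength parameter `alpha`.

```python
import numpy as np

def alter_rep(h, W, alpha=2.0):
    """Counterfactual intervention sketch (assumed simplification of AlterRep).

    h: a contextual representation vector, shape (d,)
    W: rows are probe directions encoding the feature, shape (k, d)
    alpha: hypothetical scaling of how far to push past the boundary
    """
    # Normalize probe directions to unit length.
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Signed components of h along each feature direction.
    coeffs = W @ h
    # Remove the feature subspace, leaving all other information intact.
    h_perp = h - W.T @ coeffs
    # Re-add the feature component with flipped sign: the counterfactual.
    return h_perp - alpha * (W.T @ np.abs(coeffs))
```

Substituting `alter_rep(h, W)` for `h` during the model's forward pass, and observing how agreement predictions change, is the kind of causal probe the abstract describes. Dimensions orthogonal to `W` are left untouched by construction.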
Anthology ID:
2021.conll-1.15
Volume:
Proceedings of the 25th Conference on Computational Natural Language Learning
Month:
November
Year:
2021
Address:
Online
Venues:
CoNLL | EMNLP
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
194–209
URL:
https://aclanthology.org/2021.conll-1.15
PDF:
https://aclanthology.org/2021.conll-1.15.pdf