Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention

Soo Hyun Ryu, Richard Lewis


Abstract
We advance a novel explanation of similarity-based interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2. Specifically, we show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences, where a distractor noun that matches the verb or pronoun in number leads to faster reading times, despite the distractor not participating in the agreement relation. We review the human empirical evidence for such effects, including recent meta-analyses and large-scale studies. We also show that attention in the Transformer (indexed by entropy and other measures) becomes more diffuse in the presence of similar distractors, consistent with cue-based retrieval models of parsing. But in contrast to these models, the attentional cues and memory representations are learned entirely from the simple self-supervised task of predicting the next word.
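The two measures at the heart of the abstract can be sketched in a few lines. Surprisal is the negative log probability of a word given its context, and attention entropy quantifies how diffuse an attention distribution is over prior tokens. The sketch below is illustrative only (the hypothetical attention weights are not from the paper): it shows how, under a cue-based-retrieval reading, attention split between a subject and a similar distractor yields higher entropy than attention concentrated on the true subject.

```python
import numpy as np

def surprisal(prob):
    """Surprisal in bits: -log2 p(word | context)."""
    return -np.log2(prob)

def attention_entropy(weights):
    """Shannon entropy (bits) of an attention distribution over prior tokens.
    Higher entropy indicates more diffuse attention."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize defensively
    nz = w[w > 0]              # treat 0 * log 0 as 0
    return float(-(nz * np.log2(nz)).sum())

# Hypothetical attention over {true subject, distractor, other tokens}
focused = [0.85, 0.05, 0.10]  # attention concentrated on the agreement target
diffuse = [0.45, 0.40, 0.15]  # a number-matching distractor draws attention

print(attention_entropy(focused))  # lower entropy
print(attention_entropy(diffuse))  # higher entropy: interference as diffusion
```

In the paper's actual analyses these quantities are computed from GPT-2's output probabilities and attention matrices at the verb or reflexive; the toy distributions above merely illustrate the entropy-as-diffuseness logic.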
Anthology ID:
2021.cmcl-1.6
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
June
Year:
2021
Address:
Online
Editors:
Emmanuele Chersoni, Nora Hollenstein, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
Venue:
CMCL
Publisher:
Association for Computational Linguistics
Pages:
61–71
URL:
https://aclanthology.org/2021.cmcl-1.6
DOI:
10.18653/v1/2021.cmcl-1.6
Cite (ACL):
Soo Hyun Ryu and Richard Lewis. 2021. Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 61–71, Online. Association for Computational Linguistics.
Cite (Informal):
Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention (Ryu & Lewis, CMCL 2021)
PDF:
https://aclanthology.org/2021.cmcl-1.6.pdf