Attention weights accurately predict language representations in the brain

Mathis Lamarre, Catherine Chen, Fatma Deniz


Abstract
In Transformer-based language models (LMs), the attention mechanism converts token embeddings into contextual embeddings that incorporate information from neighboring words. The resulting contextual hidden state embeddings have enabled highly accurate models of brain responses, suggesting that the attention mechanism constructs contextual embeddings that carry information reflected in language-related brain representations. However, it is unclear whether the attention weights that are used to integrate information across words are themselves related to language representations in the brain. To address this question, we analyzed functional magnetic resonance imaging (fMRI) recordings of participants reading English language narratives. We provided the narrative text as input to two LMs (BERT and GPT-2) and extracted their corresponding attention weights. We then used encoding models to determine how well attention weights can predict recorded brain responses. We find that attention weights accurately predict brain responses in much of the frontal and temporal cortices. Our results suggest that the attention mechanism itself carries information that is reflected in brain representations. Moreover, these results indicate cortical areas in which context integration may occur.
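The abstract describes a pipeline that feeds narrative text to BERT and GPT-2, extracts per-head attention weights, and fits encoding models from those weights to fMRI responses. The sketch below is a minimal, hypothetical illustration of that kind of pipeline using the HuggingFace Transformers API and scikit-learn ridge regression; the featurization (per-head attention entropy), the placeholder voxel data, and all names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

# Load a pretrained LM and request attention weights in the forward pass.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def attention_features(text: str) -> np.ndarray:
    """Return one feature per layer and head: here, the head's mean attention
    entropy over tokens (an illustrative summary, not the paper's exact
    featurization)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions is a tuple with one tensor per layer,
    # each of shape (batch, n_heads, seq_len, seq_len).
    feats = []
    for layer_attn in outputs.attentions:
        attn = layer_attn[0]                                       # (n_heads, seq, seq)
        entropy = -(attn * attn.clamp_min(1e-12).log()).sum(-1)    # (n_heads, seq)
        feats.append(entropy.mean(-1))                             # (n_heads,)
    return torch.cat(feats).numpy()                                # (n_layers * n_heads,)

# Hypothetical usage: one stimulus segment per fMRI time point.
segments = ["The narrator walked into the room.", "She paused at the window."]
X = np.stack([attention_features(s) for s in segments])   # (n_TRs, n_features)
Y = np.random.randn(len(segments), 1000)                   # placeholder voxel responses

# Voxelwise encoding model: regularized linear regression from features to responses.
encoder = Ridge(alpha=1.0).fit(X, Y)
```

In a real fMRI encoding analysis, the stimulus features would typically be aligned and downsampled to the scanner's acquisition rate and the ridge penalty selected by cross-validation before evaluating prediction accuracy on held-out responses.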
Anthology ID:
2022.findings-emnlp.330
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4513–4529
URL:
https://aclanthology.org/2022.findings-emnlp.330
DOI:
10.18653/v1/2022.findings-emnlp.330
Cite (ACL):
Mathis Lamarre, Catherine Chen, and Fatma Deniz. 2022. Attention weights accurately predict language representations in the brain. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4513–4529, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Attention weights accurately predict language representations in the brain (Lamarre et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.330.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.330.mp4