Language models and brains align due to more than next-word prediction and word-level information

Gabriele Merlin, Mariya Toneva


Abstract
Pretrained language models have been shown to significantly predict brain recordings of people comprehending language. Recent work suggests that the prediction of the next word is a key mechanism that contributes to this alignment. What is not yet understood is whether prediction of the next word is necessary for this observed alignment or simply sufficient, and whether there are other shared mechanisms or information that are similarly important. In this work, we take a step towards understanding the reasons for brain alignment via two simple perturbations in popular pretrained language models. These perturbations help us design contrasts that can control for different types of information. By contrasting the brain alignment of these differently perturbed models, we show that improvements in alignment with brain recordings are due to more than improvements in next-word prediction and word-level information.
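For readers unfamiliar with how "brain alignment" is typically quantified in this literature, the sketch below illustrates one common approach: fit a ridge-regression encoding model that predicts fMRI voxel responses from language-model representations, and score it by held-out Pearson correlation. This is a minimal illustration, not the authors' exact pipeline; the function names, the choice of ridge regression, the cross-validation scheme, and the final contrast between an original and a perturbed model are all assumptions made for exposition.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_alignment(lm_features, brain_recordings, alpha=1.0, n_folds=5):
    """Estimate brain alignment as the held-out Pearson correlation between
    ridge-regression predictions (from LM features) and brain responses.

    lm_features:      (n_samples, n_dims) representations from a language model
    brain_recordings: (n_samples, n_voxels) recorded brain responses
    Returns the mean correlation across voxels and cross-validation folds.
    """
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_folds, shuffle=False).split(lm_features):
        # Linear encoding model: LM features -> voxel responses
        model = Ridge(alpha=alpha)
        model.fit(lm_features[train_idx], brain_recordings[train_idx])
        pred = model.predict(lm_features[test_idx])
        true = brain_recordings[test_idx]
        # Pearson correlation per voxel on the held-out fold
        pred_c = pred - pred.mean(axis=0)
        true_c = true - true.mean(axis=0)
        denom = np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0)
        r = (pred_c * true_c).sum(axis=0) / np.where(denom == 0, 1, denom)
        scores.append(r.mean())
    return float(np.mean(scores))

# Hypothetical contrast between an original model and a perturbed variant
# (e.g., one whose input removes word-level or next-word information):
# alignment_orig = brain_alignment(features_original, fmri)
# alignment_pert = brain_alignment(features_perturbed, fmri)
# delta = alignment_orig - alignment_pert

In this setup, comparing the alignment scores of differently perturbed models (the "delta" above) is what lets one attribute changes in alignment to specific types of information, which is the contrast-based logic the abstract describes.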
Anthology ID:
2024.emnlp-main.1024
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18431–18454
URL:
https://aclanthology.org/2024.emnlp-main.1024
Cite (ACL):
Gabriele Merlin and Mariya Toneva. 2024. Language models and brains align due to more than next-word prediction and word-level information. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18431–18454, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Language models and brains align due to more than next-word prediction and word-level information (Merlin & Toneva, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1024.pdf