Gabriele Merlin


2024

Language models and brains align due to more than next-word prediction and word-level information
Gabriele Merlin | Mariya Toneva
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Pretrained language models have been shown to significantly predict brain recordings of people comprehending language. Recent work suggests that the prediction of the next word is a key mechanism that contributes to this alignment. What is not yet understood is whether prediction of the next word is necessary for this observed alignment or simply sufficient, and whether there are other shared mechanisms or information that are similarly important. In this work, we take a step towards understanding the reasons for brain alignment via two simple perturbations in popular pretrained language models. These perturbations help us design contrasts that can control for different types of information. By contrasting the brain alignment of these differently perturbed models, we show that improvements in alignment with brain recordings are due to more than improvements in next-word prediction and word-level information.
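Below is a minimal sketch of the general kind of analysis the abstract describes: estimating "brain alignment" as how well a model's representations linearly predict held-out brain recordings, and then contrasting the alignment of an original versus a perturbed model. This is an illustration under assumptions, not the paper's actual pipeline; the ridge-regression encoding approach, the random stand-in arrays, and the function name `brain_alignment` are all assumptions introduced here.

```python
# Sketch, assuming brain alignment is measured with a ridge-regression encoding
# model scored by held-out Pearson correlation. All data below are random
# stand-ins for real stimulus representations and fMRI/MEG recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 400, 256, 50

# Hypothetical representations of the same stimuli from an original model and a
# perturbed model, plus the corresponding brain recordings.
feats_original = rng.standard_normal((n_timepoints, n_features))
feats_perturbed = feats_original + 0.5 * rng.standard_normal((n_timepoints, n_features))
brain = feats_original @ rng.standard_normal((n_features, n_voxels)) \
        + rng.standard_normal((n_timepoints, n_voxels))

def brain_alignment(features, recordings, alpha=10.0, n_splits=5):
    """Mean held-out Pearson correlation between predicted and true voxel responses."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits).split(features):
        model = Ridge(alpha=alpha).fit(features[train_idx], recordings[train_idx])
        pred = model.predict(features[test_idx])
        true = recordings[test_idx]
        # Correlation per voxel between predictions and held-out recordings.
        r = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(recordings.shape[1])]
        scores.append(np.mean(r))
    return float(np.mean(scores))

# Contrasting alignment scores of differently perturbed models is what lets one
# ask which information (e.g., beyond next-word prediction) drives the alignment.
print("original: ", brain_alignment(feats_original, brain))
print("perturbed:", brain_alignment(feats_perturbed, brain))
```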