Hierarchical syntactic structure in human-like language models

Michael Wolfman, Donald Dunagan, Jonathan Brennan, John Hale


Abstract
Language models (LMs) are a meeting point for cognitive modeling and computational linguistics. How should they be designed to serve as adequate cognitive models? To address this question, this study contrasts two Transformer-based LMs that share the same architecture; only one of them analyzes sentences in terms of explicit hierarchical structure. When the two LMs are evaluated against fMRI time series via the surprisal complexity metric, the results implicate the superior temporal gyrus. These findings underline the need for hierarchical sentence structure in word-by-word models of human language comprehension.
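
For readers unfamiliar with the metric: the surprisal of a word w_t is its negative log probability given the preceding words, −log P(w_t | w_1 … w_{t−1}), under a given LM. A minimal sketch of computing per-token surprisal with an off-the-shelf Transformer LM (GPT-2 is used here purely as a stand-in; the paper's own architecture-matched models and fMRI regression pipeline are not reproduced) might look like:

    import math
    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # GPT-2 is a hypothetical stand-in; the paper trains its own matched LMs.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(text):
        """Return (token, surprisal in bits) for every token after the first."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits  # shape: (1, seq_len, vocab)
        # Log-probability of each token given its left context.
        log_probs = F.log_softmax(logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        nats = -log_probs[torch.arange(targets.size(0)), targets]
        bits = nats / math.log(2.0)  # convert nats to bits
        return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()),
                        bits.tolist()))

    print(token_surprisals("The dog chased the ball."))

In studies of this kind, such a per-word surprisal series is the predictor that is regressed against the fMRI time series.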
Anthology ID: 2024.cmcl-1.6
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki
Venues: CMCL | WS
Publisher: Association for Computational Linguistics
Pages: 72–80
URL: https://aclanthology.org/2024.cmcl-1.6
Cite (ACL): Michael Wolfman, Donald Dunagan, Jonathan Brennan, and John Hale. 2024. Hierarchical syntactic structure in human-like language models. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 72–80, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Hierarchical syntactic structure in human-like language models (Wolfman et al., CMCL-WS 2024)
PDF: https://aclanthology.org/2024.cmcl-1.6.pdf