Do Language Models Exhibit Human-like Structural Priming Effects?

Jaap Jumelet, Willem Zuidema, Arabella Sinclair
Abstract
We explore which linguistic factors—at the sentence and token level—play an important role in influencing language model predictions, and investigate whether these are reflective of results found in humans and human corpora (Gries and Kootstra, 2017). We make use of the structural priming paradigm—where recent exposure to a structure facilitates processing of the same structure—to investigate where priming effects manifest, and what factors predict them. We find these effects can be explained via the inverse frequency effect found in human priming, where rarer elements within a prime increase priming effects, as well as lexical dependence between prime and target. Our results provide an important piece in the puzzle of understanding how properties within their context affect structural prediction in language models.
Anthology ID: 2024.findings-acl.877
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14727–14742
URL: https://aclanthology.org/2024.findings-acl.877
Cite (ACL): Jaap Jumelet, Willem Zuidema, and Arabella Sinclair. 2024. Do Language Models Exhibit Human-like Structural Priming Effects?. In Findings of the Association for Computational Linguistics ACL 2024, pages 14727–14742, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Do Language Models Exhibit Human-like Structural Priming Effects? (Jumelet et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.877.pdf