Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens

Byung-Doh Oh, William Schuler


Abstract
Recent psycholinguistic studies have drawn conflicting conclusions about the relationship between the quality of a language model and the ability of its surprisal estimates to predict human reading times, which has been speculated to be due to the large gap in both the amount of training data and model capacity across studies. The current work aims to consolidate these findings by evaluating surprisal estimates from Transformer-based language model variants that vary systematically in the amount of training data and model capacity on their ability to predict human reading times. The results show that surprisal estimates from most variants with contemporary model capacities provide the best fit after seeing about two billion training tokens, after which they begin to diverge from humanlike expectations. Additionally, newly-trained smaller model variants reveal a ‘tipping point’ at convergence, after which the decrease in language model perplexity begins to result in poorer fits to human reading times. These results suggest that the massive amount of training data is mainly responsible for the poorer fit achieved by surprisal from larger pre-trained language models, and that a certain degree of model capacity is necessary for Transformer-based language models to capture humanlike expectations.
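The quantity at the center of the paper is surprisal, the negative log probability a language model assigns to each word given its context. The sketch below is not the authors' pipeline; it only illustrates one common way to obtain per-token surprisal from a pretrained causal language model. The use of HuggingFace Transformers and the "gpt2" checkpoint are assumptions made for illustration, whereas the paper trains and evaluates its own systematically varied model variants and then regresses human reading times on the resulting surprisal values (a step omitted here).

# Minimal sketch (not the authors' pipeline): per-token surprisal, in bits,
# from a pretrained causal language model via HuggingFace Transformers.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(text: str, model_name: str = "gpt2"):
    """Return (token, surprisal-in-bits) pairs for every token after the first."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    ids = tokenizer(text, return_tensors="pt").input_ids  # shape (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits                        # shape (1, seq_len, vocab)

    # Surprisal of w_t is -log P(w_t | w_<t); the logits at position t-1
    # score the token at position t.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nats / math.log(2)                             # convert nats to bits

    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return list(zip(tokens, bits.tolist()))

if __name__ == "__main__":
    for tok, s in token_surprisals("The old man the boats."):
        print(f"{tok:>12s}  {s:6.2f} bits")

In the paper's setting, token-level surprisals like these would be aggregated to the word level and entered as predictors in regression models of reading times; the fit of those regressions is what varies with training tokens and model capacity.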
Anthology ID:
2023.findings-emnlp.128
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1915–1921
URL:
https://aclanthology.org/2023.findings-emnlp.128
DOI:
10.18653/v1/2023.findings-emnlp.128
Cite (ACL):
Byung-Doh Oh and William Schuler. 2023. Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1915–1921, Singapore. Association for Computational Linguistics.
Cite (Informal):
Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens (Oh & Schuler, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.128.pdf