Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling

Yiding Hao, Simon Mendelsohn, Rachel Sterneck, Randi Martinez, Robert Frank


Abstract
By positing a relationship between naturalistic reading times and information-theoretic surprisal, surprisal theory (Hale, 2001; Levy, 2008) provides a natural interface between language models and psycholinguistic models. This paper re-evaluates a claim due to Goodkind and Bicknell (2018) that a language model’s ability to model reading times is a linear function of its perplexity. By extending Goodkind and Bicknell’s analysis to modern neural architectures, we show that the proposed relation does not always hold for Long Short-Term Memory networks, Transformers, and pre-trained models. We introduce an alternate measure of language modeling performance called predictability norm correlation based on Cloze probabilities measured from human subjects. Our new metric yields a more robust relationship between language model quality and psycholinguistic modeling performance that allows for comparison between models with different training configurations.
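The abstract contrasts two ways of scoring a language model for psycholinguistic modeling: corpus perplexity and correlation with human Cloze predictability norms. The sketch below is not taken from the paper; the probabilities and Cloze norms are invented, and the choice to correlate log probabilities is an illustrative assumption rather than the authors' exact definition. It only shows how the two quantities might be computed from per-word model probabilities.

```python
# Minimal sketch of the two metrics discussed in the abstract.
# All values are hypothetical; this is not the authors' implementation.
import numpy as np

# Hypothetical per-word probabilities assigned by a language model to a short text.
model_probs = np.array([0.20, 0.05, 0.60, 0.01, 0.30])

# Hypothetical Cloze predictability norms: the fraction of human subjects who
# produced each word given its preceding context.
cloze_probs = np.array([0.35, 0.10, 0.70, 0.02, 0.25])

# Surprisal in bits: -log2 p(word | context).
surprisal = -np.log2(model_probs)

# Perplexity over the text: 2 raised to the mean surprisal.
perplexity = 2 ** surprisal.mean()

# Predictability norm correlation (assumed here to be a Pearson correlation
# between log model probabilities and log Cloze probabilities).
norm_correlation = np.corrcoef(np.log2(model_probs), np.log2(cloze_probs))[0, 1]

print(f"perplexity = {perplexity:.2f}")
print(f"predictability norm correlation = {norm_correlation:.2f}")
```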
Anthology ID: 2020.cmcl-1.10
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: November
Year: 2020
Address: Online
Editors: Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
Venue: CMCL
Publisher: Association for Computational Linguistics
Pages: 75–86
URL: https://aclanthology.org/2020.cmcl-1.10
DOI: 10.18653/v1/2020.cmcl-1.10
Cite (ACL): Yiding Hao, Simon Mendelsohn, Rachel Sterneck, Randi Martinez, and Robert Frank. 2020. Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 75–86, Online. Association for Computational Linguistics.
Cite (Informal): Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling (Hao et al., CMCL 2020)
PDF: https://aclanthology.org/2020.cmcl-1.10.pdf
Video: https://slideslive.com/38939682
Data: Billion Word Benchmark, One Billion Word Benchmark, WebText, WikiText-103, WikiText-2