Not Every Metric is Equal: Cognitive Models for Predicting N400 and P600 Components During Reading Comprehension

Lavinia Salicchi, Yu-Yin Hsu


Abstract
In recent years, numerous studies have sought to understand the cognitive dynamics underlying language processing by modeling reading times and ERP amplitudes with computational metrics such as surprisal. In the present paper, we examine the predictive power of surprisal, entropy, and a novel metric based on semantic similarity for the N400 and P600 components. Our experiments, conducted with Mandarin Chinese materials, revealed three key findings: 1) expectancy plays a primary role for the N400; 2) the P600 also reflects the cognitive effort required to evaluate linguistic input semantically; and 3) during the time window of interest, information uncertainty influences language processing the most. Our findings show how computational metrics that capture distinct cognitive dimensions can effectively address psycholinguistic questions.
Anthology ID:
2025.coling-main.246
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3648–3654
URL:
https://aclanthology.org/2025.coling-main.246/
Cite (ACL):
Lavinia Salicchi and Yu-Yin Hsu. 2025. Not Every Metric is Equal: Cognitive Models for Predicting N400 and P600 Components During Reading Comprehension. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3648–3654, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Not Every Metric is Equal: Cognitive Models for Predicting N400 and P600 Components During Reading Comprehension (Salicchi & Hsu, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.246.pdf