Psychometric Predictive Power of Large Language Models

Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin


Abstract
Instruction tuning aligns the response of large language models (LLMs) with human preferences. Despite such efforts in human–LLM alignment, we find that instruction tuning does not always make LLMs human-like from a cognitive modeling perspective. More specifically, next-word probabilities estimated by instruction-tuned LLMs are often worse at simulating human reading behavior than those estimated by base LLMs. In addition, we explore prompting methodologies for simulating human reading behavior with LLMs. Our results show that prompts reflecting a particular linguistic hypothesis improve psychometric predictive power, but are still inferior to small base models. These findings highlight that recent advancements in LLMs, i.e., instruction tuning and prompting, do not offer better estimates than direct probability measurements from base LLMs in cognitive modeling. In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.
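The "psychometric predictive power" discussed in the abstract rests on surprisal, the negative log probability a model assigns to each word; in cognitive modeling, higher surprisal is the standard predictor of longer human reading times. A minimal sketch of this linking function, using hypothetical per-word probabilities rather than an actual LLM:

```python
import math

def surprisals(probs):
    """Convert next-word probabilities to surprisal in bits (-log2 p).

    Surprisal is the usual linking function between language-model
    probabilities and human reading times in psychometric modeling.
    """
    return [-math.log2(p) for p in probs]

# Hypothetical next-word probabilities a model might assign to each
# word of the garden-path sentence "the horse raced past the barn fell".
probs = [0.20, 0.05, 0.01, 0.30, 0.25, 0.10, 0.002]
bits = surprisals(probs)

# The least predictable word ("fell") gets the highest surprisal,
# and would be expected to take longest to read.
assert bits[-1] == max(bits)
```

In the paper's setting, `probs` would come from a base or instruction-tuned LLM, and predictive power is assessed by how well the resulting surprisals explain measured reading times.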
Anthology ID:
2024.findings-naacl.129
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1983–2005
URL:
https://aclanthology.org/2024.findings-naacl.129
Cite (ACL):
Tatsuki Kuribayashi, Yohei Oseki, and Timothy Baldwin. 2024. Psychometric Predictive Power of Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1983–2005, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Psychometric Predictive Power of Large Language Models (Kuribayashi et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.129.pdf
Copyright:
2024.findings-naacl.129.copyright.pdf