Can Transformer Language Models Predict Psychometric Properties?

Antonio Laverghetta Jr., Animesh Nighojkar, Jamshidbek Mirzakhalov, John Licato


Abstract
Transformer-based language models (LMs) continue to advance state-of-the-art performance on NLP benchmark tasks, including tasks designed to mimic human-inspired “commonsense” competencies. To better understand the degree to which LMs can be said to have certain linguistic reasoning skills, researchers are beginning to adapt the tools and concepts of psychometrics. But to what extent can the benefits flow in the other direction? That is, can LMs be of use in predicting the psychometric properties of test items when those items are given to human participants? We gather responses from numerous human participants and LMs (transformer- and non-transformer-based) on a broad diagnostic test of linguistic competencies. We then calculate standard psychometric properties of the items in the diagnostic test from the human responses and the LM responses separately, and determine how well the two sets of estimates match. We find cases in which transformer-based LMs predict psychometric properties consistently well in certain categories but consistently poorly in others, thus providing new insights into fundamental similarities and differences between human and LM reasoning.
Anthology ID:
2021.starsem-1.2
Volume:
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Month:
August
Year:
2021
Address:
Online
Editors:
Lun-Wei Ku, Vivi Nastase, Ivan Vulić
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
12–25
URL:
https://aclanthology.org/2021.starsem-1.2
DOI:
10.18653/v1/2021.starsem-1.2
Cite (ACL):
Antonio Laverghetta Jr., Animesh Nighojkar, Jamshidbek Mirzakhalov, and John Licato. 2021. Can Transformer Language Models Predict Psychometric Properties?. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 12–25, Online. Association for Computational Linguistics.
Cite (Informal):
Can Transformer Language Models Predict Psychometric Properties? (Laverghetta Jr. et al., *SEM 2021)
PDF:
https://aclanthology.org/2021.starsem-1.2.pdf
Code:
Advancing-Machine-Human-Reasoning-Lab/transformer-psychometrics
Data:
ANLI, ChaosNLI, GLUE, MultiNLI, SNLI, TaxiNLI