Holmes ⌕ A Benchmark to Assess the Linguistic Competence of Language Models

Andreas Waldis, Yotam Perlitz, Leshem Choshen, Yufang Hou, Iryna Gurevych


Abstract
We introduce Holmes, a new benchmark designed to assess language models’ (LMs’) linguistic competence—their unconscious understanding of linguistic phenomena. Specifically, we use classifier-based probing to examine LMs’ internal representations regarding distinct linguistic phenomena (e.g., part-of-speech tagging). As a result, we meet recent calls to disentangle LMs’ linguistic competence from other cognitive abilities, such as following instructions in prompting-based evaluations. Composing Holmes, we review over 270 probing studies and include more than 200 datasets to assess syntax, morphology, semantics, reasoning, and discourse phenomena. Analyzing over 50 LMs reveals that, aligned with known trends, their linguistic competence correlates with model size. However, surprisingly, model architecture and instruction tuning also significantly influence performance, particularly in morphology and syntax. Finally, we propose FlashHolmes, a streamlined version that reduces the computation load while maintaining high ranking precision.
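For intuition, the sketch below illustrates classifier-based probing as the abstract describes it: a frozen LM encodes text, and a simple linear classifier is trained on its hidden states to predict a linguistic property such as part-of-speech tags. This is an illustrative setup only, not the Holmes codebase; the model choice (bert-base-uncased), the toy sentences, and the tag set are assumptions for the example.

```python
# Minimal sketch of classifier-based probing (illustrative; not the Holmes code).
# Assumption: we probe bert-base-uncased for part-of-speech information by
# training a linear classifier on its frozen hidden states.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # the LM stays frozen; only the probe is trained

# Toy labelled data: (sentence, per-word POS tags). A real probe would use
# an annotated corpus such as Universal Dependencies.
sentences = [
    ("The cat sleeps", ["DET", "NOUN", "VERB"]),
    ("A dog barks", ["DET", "NOUN", "VERB"]),
]

features, labels = [], []
for text, tags in sentences:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Represent each word by its first sub-token's hidden state.
    seen = set()
    for idx, word_id in enumerate(enc.word_ids()):
        if word_id is not None and word_id not in seen:
            seen.add(word_id)
            features.append(hidden[idx].numpy())
            labels.append(tags[word_id])

# The probe itself: a linear classifier over frozen representations. Its
# accuracy is read as evidence of how accessible POS information is in the
# LM's internal states. In practice, score on held-out data, not the
# training set as done here for brevity.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))
```

Holmes repeats this recipe across hundreds of datasets and phenomena (syntax, morphology, semantics, reasoning, discourse), so the probe's performance, rather than prompted generation, is what measures linguistic competence.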
Anthology ID: 2024.tacl-1.88
Volume: Transactions of the Association for Computational Linguistics, Volume 12
Year: 2024
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 1616–1647
URL: https://aclanthology.org/2024.tacl-1.88/
DOI: 10.1162/tacl_a_00718
Cite (ACL):
Andreas Waldis, Yotam Perlitz, Leshem Choshen, Yufang Hou, and Iryna Gurevych. 2024. Holmes ⌕ A Benchmark to Assess the Linguistic Competence of Language Models. Transactions of the Association for Computational Linguistics, 12:1616–1647.
Cite (Informal):
Holmes ⌕ A Benchmark to Assess the Linguistic Competence of Language Models (Waldis et al., TACL 2024)
PDF: https://aclanthology.org/2024.tacl-1.88.pdf