Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers

James Michaelov, Benjamin Bergen


Abstract
How well do language models deal with quantification? In this study, we focus on ‘few’-type quantifiers, as in ‘few children like toys’, which might pose a particular challenge for language models because the sentence components without the quantifier are likely to co-occur, and ‘few’-type quantifiers are rare. We present 960 English sentence stimuli from two human neurolinguistic experiments to 22 autoregressive transformer models of differing sizes. Not only do all the models perform poorly on ‘few’-type quantifiers, but overall, the larger the model, the worse its performance. This inverse scaling is consistent with previous work suggesting that larger models increasingly reflect online rather than offline human processing, and we argue that the decreasing performance of larger models may challenge the use of language models as the basis for natural language systems.
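
For readers unfamiliar with how such predictions are typically measured, the sketch below shows one common way to score an autoregressive language model's expectation for a continuation following a quantifier, using the Hugging Face transformers library. The model name (gpt2), the scoring helper, and the example sentences are illustrative assumptions, not the authors' actual stimuli or code.

    # Minimal sketch (not the authors' code): scoring how strongly an
    # autoregressive LM predicts a continuation after a 'few'-type quantifier.
    # Model name and sentences are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # assumed example; the paper evaluates 22 models of differing sizes
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def continuation_logprob(context: str, continuation: str) -> float:
        """Sum of log probabilities the model assigns to `continuation` given `context`."""
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        total = 0.0
        # Score only the continuation tokens (positions after the context).
        for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
            token_id = full_ids[0, pos]
            total += log_probs[0, pos - 1, token_id].item()
        return total

    # Hypothetical stimulus pair: a model handling the quantifier correctly should
    # find the typical continuation less expected after 'few' than after 'most'.
    print(continuation_logprob("Most children like", " toys"))
    print(continuation_logprob("Few children like", " toys"))

A usage note: comparing such log probabilities (or surprisals) across contexts differing only in the quantifier, and across models of different sizes, is one way the kind of inverse scaling described in the abstract could be observed.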
Anthology ID:
2023.findings-acl.891
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14162–14174
URL:
https://aclanthology.org/2023.findings-acl.891
DOI:
10.18653/v1/2023.findings-acl.891
Cite (ACL):
James Michaelov and Benjamin Bergen. 2023. Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 14162–14174, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers (Michaelov & Bergen, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.891.pdf