Generics are puzzling. Can language models find the missing piece?

Gustavo Cilleruelo, Emily Allaway, Barry Haddow, Alexandra Birch


Abstract
Generic sentences express generalisations about the world without explicit quantification. Although generics are central to everyday communication, building a precise semantic framework has proven difficult, in part because speakers use generics to generalise properties with widely different statistical prevalence. In this work, we study the implicit quantification and context-sensitivity of generics by leveraging language models as models of language. We create ConGen, a dataset of 2873 naturally occurring generic and quantified sentences in context, and define p-acceptability, a metric based on surprisal that is sensitive to quantification. Our experiments show generics are more context-sensitive than determiner quantifiers, and about 20% of the naturally occurring generics we analyse express weak generalisations. We also explore how human biases in stereotypes can be observed in language models.
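The abstract's p-acceptability metric builds on surprisal, the negative log-probability a language model assigns to each token given its context. The exact definition of p-acceptability is given in the paper itself; the sketch below only illustrates the surprisal ingredient, using a toy add-one-smoothed bigram model over a hypothetical corpus in place of a pretrained LM (corpus, sentences, and function names are all illustrative assumptions, not the authors' setup).

```python
import math
from collections import Counter

# Hypothetical toy corpus; the paper uses pretrained language models instead.
corpus = [
    "ducks lay eggs",
    "ducks swim fast",
    "all ducks swim",
    "some ducks lay eggs",
]

# Count bigrams and context unigrams, with a start-of-sentence marker.
bigrams = Counter()
unigrams = Counter()
vocab = set()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    vocab.update(toks)
    for a, b in zip(toks, toks[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

V = len(vocab)

def surprisal(prev, word):
    """-log2 P(word | prev) under the add-one-smoothed bigram model."""
    return -math.log2((bigrams[(prev, word)] + 1) / (unigrams[prev] + V))

def mean_surprisal(sentence):
    """Average per-token surprisal; lower means the model finds the
    sentence more expected in context (a crude acceptability proxy)."""
    toks = ["<s>"] + sentence.split()
    scores = [surprisal(a, b) for a, b in zip(toks, toks[1:])]
    return sum(scores) / len(scores)

# An attested continuation is less surprising than an unattested one.
print(mean_surprisal("ducks lay eggs") < mean_surprisal("ducks lay bricks"))
```

With a real LM, the same per-token surprisals would come from the model's conditional token probabilities, and comparing scores across quantified variants of a sentence ("all ducks...", "some ducks...", bare generic) is what makes a surprisal-based metric sensitive to quantification.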
Anthology ID:
2025.coling-main.438
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
6571–6588
URL:
https://aclanthology.org/2025.coling-main.438/
Cite (ACL):
Gustavo Cilleruelo, Emily Allaway, Barry Haddow, and Alexandra Birch. 2025. Generics are puzzling. Can language models find the missing piece?. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6571–6588, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Generics are puzzling. Can language models find the missing piece? (Cilleruelo et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.438.pdf