Quantifying Generalizations: Exploring the Divide Between Human and LLMs’ Sensitivity to Quantification

Claudia Collacciani, Giulia Rambelli, Marianna Bolognesi


Abstract
Generics are expressions used to communicate abstractions about categories. While conveying general truths (e.g., “Birds fly”), generics have the interesting property of admitting exceptions (e.g., penguins do not fly). Statements of this type help us organize our knowledge of the world and form the basis of how we express it (Hampton, 2012; Leslie, 2014). This study investigates how Large Language Models (LLMs) interpret generics, drawing upon psycholinguistic experimental methodologies. Understanding how LLMs interpret generic statements serves not only as a measure of their ability to abstract but also arguably plays a role in their encoding of stereotypes. Given that the interpretation of generics necessitates a comparison with explicitly quantified sentences, we explored i.) whether LLMs can correctly associate a quantifier with the generic structure, and ii.) whether the presence of a generic sentence as context influences the outcomes of quantifiers. We evaluated LLMs using both Surprisal distributions and prompting techniques. The findings indicate that models do not exhibit a strong sensitivity to quantification. Nevertheless, they seem to encode a meaning linked with the generic structure, which leads them to adjust their answers accordingly when a generalization is provided as context.
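As context for the abstract's surprisal-based evaluation: surprisal measures how unexpected a continuation is under a language model, defined as the negative log probability of that continuation. The sketch below illustrates the computation with made-up probabilities for quantifiers continuing a generic context; it is not the paper's actual setup, and the example values and context sentence are purely hypothetical.

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits of an event with probability p: -log2(p)."""
    return -math.log2(p)

# Hypothetical model probabilities for quantifiers continuing the
# context "___ birds fly." (illustrative values only; in the paper
# these probabilities would come from an LLM).
p_quantifier = {"All": 0.10, "Most": 0.40, "Some": 0.30, "Few": 0.05}

for q, p in p_quantifier.items():
    # A lower surprisal means the model finds that quantifier a more
    # expected completion of the generic statement.
    print(f"{q}: {surprisal(p):.2f} bits")
```

Comparing such surprisal values across quantifiers is one way to probe whether a model's expectations align with how humans map generics onto quantified statements.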
Anthology ID:
2024.acl-long.636
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
11811–11822
URL:
https://aclanthology.org/2024.acl-long.636
Cite (ACL):
Claudia Collacciani, Giulia Rambelli, and Marianna Bolognesi. 2024. Quantifying Generalizations: Exploring the Divide Between Human and LLMs’ Sensitivity to Quantification. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11811–11822, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Quantifying Generalizations: Exploring the Divide Between Human and LLMs’ Sensitivity to Quantification (Collacciani et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.636.pdf