Not all quantifiers are equal: Probing Transformer-based language models’ understanding of generalised quantifiers

Tharindu Madusanka, Iqra Zahid, Hao Li, Ian Pratt-Hartmann, Riza Batista-Navarro


Abstract
How do different generalised quantifiers affect the behaviour of transformer-based language models (TLMs)? The recent popularity of TLMs and the central role generalised quantifiers have traditionally played in linguistics and logic bring this question into particular focus. Existing research on this subject has not utilised a task defined in purely logical terms, and thus has not captured the underlying logical significance of generalised quantifiers; consequently, it has not answered the question faithfully or adequately. Therefore, we investigate how different generalised quantifiers affect TLMs by employing a textual entailment problem defined in a purely logical sense, namely, model-checking with natural language. Our approach permits the automatic construction of datasets with respect to which we can assess the ability of TLMs to learn the meanings of generalised quantifiers. Our investigation reveals that TLMs generally can comprehend the logical semantics of the most common generalised quantifiers, but that distinct quantifiers influence TLMs in varying ways.
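To make the task concrete: in model-checking with natural language, a system is given a finite structure together with a quantified sentence and must decide whether the sentence is true in that structure. The minimal Python sketch below illustrates the set-theoretic semantics involved; the predicates, individuals, and helper names are hypothetical illustrations, not artifacts from the paper.

from typing import Callable, Dict, Set

# A toy "model": each unary predicate denotes a set of individuals.
MODEL: Dict[str, Set[str]] = {
    "artist": {"ann", "bob", "cal"},
    "beekeeper": {"bob", "cal", "dee"},
}

# A generalised quantifier is a relation between two sets A and B.
QUANTIFIERS: Dict[str, Callable[[Set[str], Set[str]], bool]] = {
    "every": lambda a, b: a <= b,                  # A is a subset of B
    "some": lambda a, b: bool(a & b),              # A and B overlap
    "no": lambda a, b: not (a & b),                # A and B are disjoint
    "most": lambda a, b: len(a & b) > len(a - b),  # the Bs among A outnumber the non-Bs
    "at least two": lambda a, b: len(a & b) >= 2,
}

def check(quantifier: str, subject: str, predicate: str) -> bool:
    """Truth of '<quantifier> <subject>s are <predicate>s' in MODEL."""
    return QUANTIFIERS[quantifier](MODEL[subject], MODEL[predicate])

print(check("most", "artist", "beekeeper"))   # True:  {bob, cal} outnumbers {ann}
print(check("every", "artist", "beekeeper"))  # False: ann is not a beekeeper

Under this framing the gold label of each entailment instance is determined entirely by the quantifier's logical semantics over the described structure, which is what allows the datasets to be constructed automatically.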
Anthology ID:
2023.emnlp-main.536
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8680–8692
URL:
https://aclanthology.org/2023.emnlp-main.536
DOI:
10.18653/v1/2023.emnlp-main.536
Cite (ACL):
Tharindu Madusanka, Iqra Zahid, Hao Li, Ian Pratt-Hartmann, and Riza Batista-Navarro. 2023. Not all quantifiers are equal: Probing Transformer-based language models’ understanding of generalised quantifiers. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8680–8692, Singapore. Association for Computational Linguistics.
Cite (Informal):
Not all quantifiers are equal: Probing Transformer-based language models’ understanding of generalised quantifiers (Madusanka et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.536.pdf
Video:
https://aclanthology.org/2023.emnlp-main.536.mp4