Montague semantics and modifier consistency measurement in neural language models

Danilo Silva de Carvalho, Edoardo Manino, Julia Rozanova, Lucas Cordeiro, André Freitas


Abstract
This work proposes a novel methodology for measuring compositional behavior in contemporary language embedding models. Specifically, we focus on adjectival modifier phenomena in adjective-noun phrases. In recent years, distributional language representation models have demonstrated great practical success. At the same time, the need for interpretability has raised questions about their intrinsic properties and capabilities. Crucially, distributional models are often inconsistent when dealing with compositional phenomena in natural language, which has significant implications for their safety and fairness. Despite this, most current research on compositionality is directed only towards improving their performance on similarity tasks. This work takes a different approach, introducing three novel tests of compositional behavior inspired by Montague semantics. Our experimental results indicate that current neural language models do not behave as the relevant linguistic theories predict. This suggests either that current language models lack the capability to capture, from limited context, the semantic properties we evaluated, or that linguistic theories in the Montagovian tradition do not match the expected capabilities of distributional models.
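To give a rough sense of what a modifier-consistency probe over adjective-noun phrases can look like, the sketch below embeds phrases with an off-the-shelf sentence encoder and checks whether a modified phrase stays closer to its head noun than to an unrelated noun. This is a hypothetical illustration only, not the three tests introduced in the paper; the model checkpoint, probe words, and the cosine-similarity criterion are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's test suite): check whether an
# adjective-noun phrase embedding remains closer to its head noun than to an
# unrelated distractor noun, a crude proxy for modifier consistency.
# Assumes the sentence-transformers package and the "all-MiniLM-L6-v2"
# checkpoint, both chosen here only as convenient stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical probes: (adjective-noun phrase, head noun, unrelated noun).
# The privative example ("fake gun") is one where entailment-style
# consistency is expected to break down under Montagovian analyses.
probes = [
    ("red car", "car", "banana"),
    ("small dog", "dog", "piano"),
    ("fake gun", "gun", "carpet"),
]

for phrase, head, distractor in probes:
    e_phrase, e_head, e_distractor = model.encode([phrase, head, distractor])
    consistent = cosine(e_phrase, e_head) > cosine(e_phrase, e_distractor)
    print(f"{phrase!r}: closer to {head!r} than to {distractor!r}? {consistent}")
```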
Anthology ID:
2025.coling-main.370
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
5515–5529
URL:
https://aclanthology.org/2025.coling-main.370/
Cite (ACL):
Danilo Silva de Carvalho, Edoardo Manino, Julia Rozanova, Lucas Cordeiro, and André Freitas. 2025. Montague semantics and modifier consistency measurement in neural language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5515–5529, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Montague semantics and modifier consistency measurement in neural language models (Silva de Carvalho et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.370.pdf