Manuel Vargas Guzmán
2026
Teaching Small Language Models to Learn Logic through Meta-Learning
Leonardo Bertolazzi | Manuel Vargas Guzmán | Raffaella Bernardi | Maciej Malicki | Jakub Szymanik
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) are increasingly evaluated on reasoning tasks, yet their logical abilities remain contested. To address this, we study LLMs’ reasoning in a well-defined fragment of logic: syllogistic reasoning. We cast the problem as premise selection and construct controlled datasets to isolate logical competence. Beyond evaluation, an open challenge is enabling LLMs to acquire abstract inference patterns that generalize to novel structures. We propose to apply few-shot meta-learning to this domain, thereby encouraging models to extract rules across tasks rather than memorize patterns within tasks. Although meta-learning has been little explored in the context of logic learnability, our experiments show that it is effective: small models (1.5B–7B) fine-tuned with meta-learning demonstrate strong gains in generalization, with especially pronounced benefits in low-data regimes. These meta-learned models outperform GPT-4o and o3-mini on our syllogistic reasoning task.
2024
Testing the limits of logical reasoning in neural and hybrid models
Manuel Vargas Guzmán | Jakub Szymanik | Maciej Malicki
Findings of the Association for Computational Linguistics: NAACL 2024
We study the ability of neural and hybrid models to generalize logical reasoning patterns. We created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study the syllogistic logic in hybrid models, where the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional, and transformer architectures. Our experiments demonstrate that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree.
2022
Prepositions Matter in Quantifier Scope Disambiguation
Aleksander Leczkowski | Justyna Grudzińska | Manuel Vargas Guzmán | Aleksander Wawer | Aleksandra Siemieniuk
Proceedings of the 29th International Conference on Computational Linguistics
Although it is widely agreed that world knowledge plays a significant role in quantifier scope disambiguation (QSD), there has been only very limited work on how to integrate this knowledge into a QSD model. This paper contributes to this scarce line of research by incorporating into a machine learning model our knowledge about relations, as conveyed by a manageable closed class of function words: prepositions. For data, we use a scope-disambiguated corpus created by AnderBois, Brasoveanu, and Henderson, which is additionally annotated with prepositional senses using Schneider et al.'s Semantic Network of Adposition and Case Supersenses (SNACS) scheme. By applying Manshadi and Allen's method to the corpus, we were able to inspect the information gain provided by prepositions for the QSD task. Statistical analysis of the performance of the classifiers, trained in scenarios with and without preposition information, supports the claim that prepositional senses have a strong positive impact on the learnability of automatic QSD systems.