Testing the limits of logical reasoning in neural and hybrid models

Manuel Guzman, Jakub Szymanik, Maciej Malicki


Abstract
We study the ability of neural and hybrid models to generalize logical reasoning patterns. We created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study syllogistic logic in hybrid models, where the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional, and transformer architectures. Our experiments demonstrate that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree.
Anthology ID:
2024.findings-naacl.147
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2267–2279
URL:
https://aclanthology.org/2024.findings-naacl.147
Cite (ACL):
Manuel Guzman, Jakub Szymanik, and Maciej Malicki. 2024. Testing the limits of logical reasoning in neural and hybrid models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2267–2279, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Testing the limits of logical reasoning in neural and hybrid models (Guzman et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.147.pdf
Copyright:
2024.findings-naacl.147.copyright.pdf