Evaluating Generalization Capability of Language Models across Abductive, Deductive and Inductive Logical Reasoning

Yu Sheng, Wanting Wen, Linjing Li, Daniel Zeng


Abstract
Transformer-based language models (LMs) have demonstrated remarkable performance on many natural language tasks, yet the extent to which LMs can generalize to unseen logical rules remains insufficiently explored. In classical logic, abductive, deductive and inductive (ADI) reasoning are defined as the fundamental reasoning types; they share the same reasoning primitives and properties, and prior work has suggested that mutual generalization across them exists. However, in the field of natural language processing, previous research has generally studied LMs’ ADI reasoning capabilities separately, overlooking the generalization across them. To bridge this gap, we propose UniADILR, a novel logical reasoning dataset crafted for assessing the generalization capabilities of LMs across different logical rules. Based on UniADILR, we conduct extensive investigations of LMs’ performance on ADI reasoning from various perspectives. The experimental results reveal the weakness of current LMs in extrapolating to unseen rules and offer new insights for future research in logical reasoning.
Anthology ID: 2025.coling-main.330
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 4945–4957
URL: https://aclanthology.org/2025.coling-main.330/
Cite (ACL): Yu Sheng, Wanting Wen, Linjing Li, and Daniel Zeng. 2025. Evaluating Generalization Capability of Language Models across Abductive, Deductive and Inductive Logical Reasoning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4945–4957, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Evaluating Generalization Capability of Language Models across Abductive, Deductive and Inductive Logical Reasoning (Sheng et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.330.pdf