A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning

Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, Changshui Zhang

Abstract
Logical reasoning has been an ongoing pursuit in the field of AI. Despite significant advances, large language models (LLMs) still struggle with complex logical reasoning problems. One promising direction for improving reasoning performance is scalable oversight, which requires LLMs to identify their own errors and then improve by themselves. Various self-verification methods have been proposed in pursuit of this goal, yet whether existing models understand their own errors well remains under investigation. In this paper, we take a closer look at the self-verification abilities of LLMs in the context of logical reasoning, focusing on whether they can accurately identify logical fallacies. We introduce FALLACIES, a dataset containing 232 types of reasoning fallacies organized in a hierarchical taxonomy. Through exhaustive experiments on FALLACIES, we provide comprehensive and detailed analyses of the verification abilities of a series of models. Our main findings suggest that existing LLMs may struggle to identify fallacious reasoning steps accurately and may therefore fall short of guaranteeing the validity of self-verification methods. Drawing on these observations, we offer suggestions for future research and for practical applications of self-verification methods.
Anthology ID:
2024.naacl-long.52
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
900–925
URL:
https://aclanthology.org/2024.naacl-long.52
Cite (ACL):
Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, and Changshui Zhang. 2024. A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 900–925, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning (Hong et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.52.pdf
Copyright:
2024.naacl-long.52.copyright.pdf