Reason from Fallacy: Enhancing Large Language Models’ Logical Reasoning through Logical Fallacy Understanding

Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang


Abstract
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks, including logical reasoning. One non-negligible reason for LLMs’ suboptimal performance on logical reasoning is that they overlook the correct understanding of logical fallacies. To evaluate LLMs’ capability of logical fallacy understanding (LFU), we propose five concrete tasks from three cognitive dimensions of WHAT, WHY, and HOW in this paper. Towards these LFU tasks, we have successfully constructed a new dataset, LFUD, based on GPT-4 with a small amount of human effort. Our extensive experiments justify that our LFUD can be used not only to evaluate LLMs’ LFU capability, but also to fine-tune LLMs to obtain significantly enhanced performance on logical reasoning.
Anthology ID:
2024.findings-naacl.192
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3053–3066
URL:
https://aclanthology.org/2024.findings-naacl.192
Cite (ACL):
Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, and Deqing Yang. 2024. Reason from Fallacy: Enhancing Large Language Models’ Logical Reasoning through Logical Fallacy Understanding. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3053–3066, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Reason from Fallacy: Enhancing Large Language Models’ Logical Reasoning through Logical Fallacy Understanding (Li et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.192.pdf
Copyright:
2024.findings-naacl.192.copyright.pdf