Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation

Jiwon Jeong, Hyeju Jang, Hogun Park


Abstract
The advancement of Large Language Models (LLMs) has greatly improved our ability to process complex language. However, accurately detecting logical fallacies remains a significant challenge. This study presents a novel and effective prompt formulation approach for logical fallacy detection, applicable in both supervised (fine-tuned) and unsupervised (zero-shot) settings. Our method enriches input text by incorporating implicit contextual information—counterarguments, explanations, and goals—which we query for validity within the argument’s context. We then rank these queries based on confidence scores to inform classification. We evaluate our approach across multiple datasets from 5 domains, covering 29 distinct fallacy types, using models from GPT and LLaMA series. The results show substantial improvements over state-of-the-art models: up to a 0.57 increase in F1-score in zero-shot settings and up to 0.45 in fine-tuned models. Extensive analyses further illustrate why and how our method excels.
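To make the abstract's pipeline concrete, below is a minimal Python sketch of the described idea: elicit implicit context (counterargument, explanation, goal) for an argument, query each piece of context for validity, rank the queries by confidence, and use the top-ranked one to inform the fallacy classification. The `query_llm` function, the prompt wording, and the example label set are placeholders assumed for illustration, not the authors' implementation or exact prompts.

```python
# Hedged sketch of counterargument/explanation/goal-aware prompt formulation.
# `query_llm` is a stand-in for any chat-completion call that returns an
# answer together with a confidence score; replace it with a real client.

FALLACY_LABELS = ["ad hominem", "false dilemma", "slippery slope"]  # illustrative subset

CONTEXT_QUERIES = {
    "counterargument": "State the strongest counterargument to this argument.",
    "explanation": "Explain the reasoning the argument relies on.",
    "goal": "State the goal the arguer is trying to achieve.",
}


def query_llm(prompt: str) -> tuple[str, float]:
    """Placeholder: return (answer, confidence in [0, 1]) from an LLM."""
    raise NotImplementedError


def classify_fallacy(argument: str) -> str:
    # 1) Elicit implicit context: counterargument, explanation, and goal.
    contexts = {}
    for name, question in CONTEXT_QUERIES.items():
        answer, _ = query_llm(f"Argument: {argument}\n{question}")
        contexts[name] = answer

    # 2) Query each piece of context for validity within the argument,
    #    keeping the confidence score attached to each query.
    scored = []
    for name, text in contexts.items():
        prompt = (
            f"Argument: {argument}\n{name.capitalize()}: {text}\n"
            "Is this consistent and valid in the argument's context?"
        )
        verdict, confidence = query_llm(prompt)
        scored.append((confidence, name, verdict))

    # 3) Rank the validity queries by confidence and let the top-ranked
    #    context inform the final fallacy classification prompt.
    scored.sort(reverse=True)
    top_conf, top_name, top_verdict = scored[0]
    final_prompt = (
        f"Argument: {argument}\n"
        f"Most reliable context ({top_name}, confidence {top_conf:.2f}): {top_verdict}\n"
        f"Which logical fallacy, if any, does the argument commit? "
        f"Options: {', '.join(FALLACY_LABELS)}"
    )
    label, _ = query_llm(final_prompt)
    return label
```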
Anthology ID:
2025.findings-naacl.384
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6918–6937
URL:
https://aclanthology.org/2025.findings-naacl.384/
Cite (ACL):
Jiwon Jeong, Hyeju Jang, and Hogun Park. 2025. Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6918–6937, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation (Jeong et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-naacl.384.pdf