Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs

Yi Fang, Moxin Li, Wenjie Wang, Lin Hui, Fuli Feng


Abstract
Large Language Models (LLMs) excel at various natural language processing tasks but struggle with hallucination. Existing solutions exploit LLMs’ inherent reasoning abilities to alleviate hallucination, such as self-correction and diverse sampling methods. However, these methods often overtrust the LLMs’ initial answers due to inherent biases. The key to alleviating this issue lies in overriding those inherent biases during answer inspection. To this end, we propose a CounterFactual Multi-Agent Debate (CFMAD) framework. CFMAD presets the stances of LLMs to override their inherent biases, compelling each LLM to generate justifications for a predetermined answer’s correctness. The LLMs with different preset stances then engage a skeptical critic in a counterfactual debate over the rationality of the generated justifications. Finally, a third-party judge evaluates the debate to determine the final answer. Extensive experiments on four datasets across three tasks demonstrate the superiority of CFMAD over existing methods.
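The abstract describes a three-stage pipeline: preset stances, counterfactual debate with a skeptical critic, and a third-party judge. Below is a minimal, hypothetical Python sketch of that flow; the `llm` helper, the prompts, and the two-round debate loop are illustrative assumptions, not the authors’ released implementation.

```python
# Minimal sketch of a CFMAD-style counterfactual debate.
# Assumes a generic `llm(prompt: str) -> str` chat-completion helper
# (hypothetical; plug in any LLM API here).

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to an LLM."""
    raise NotImplementedError

def cfmad(question: str, candidate_answers: list[str], rounds: int = 2) -> str:
    debates = []
    for answer in candidate_answers:
        # 1. Preset stance: force the model to argue that this candidate is
        #    correct, overriding its preference for its own initial answer.
        justification = llm(
            f"Question: {question}\n"
            f"Assume the answer '{answer}' is correct. "
            f"Give your best justification for why it is correct."
        )
        transcript = [f"Advocate ({answer}): {justification}"]

        # 2. Counterfactual debate: a skeptical critic attacks the
        #    justification, and the stance-preset advocate rebuts.
        for _ in range(rounds):
            critique = llm(
                "You are a skeptical critic. Point out flaws in this "
                "justification:\n" + transcript[-1]
            )
            transcript.append(f"Critic: {critique}")
            rebuttal = llm(
                f"Defend the answer '{answer}' against this critique:\n{critique}"
            )
            transcript.append(f"Advocate ({answer}): {rebuttal}")
        debates.append((answer, "\n".join(transcript)))

    # 3. Third-party judge reads all debates and picks the final answer.
    joined = "\n\n".join(f"Candidate '{a}':\n{t}" for a, t in debates)
    verdict = llm(
        f"Question: {question}\n"
        f"Below are debates about each candidate answer:\n{joined}\n"
        f"Which candidate answer is best supported? Reply with that answer only."
    )
    return verdict.strip()
```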
Anthology ID:
2025.coling-main.703
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
10554–10568
URL:
https://aclanthology.org/2025.coling-main.703/
Cite (ACL):
Yi Fang, Moxin Li, Wenjie Wang, Lin Hui, and Fuli Feng. 2025. Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10554–10568, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs (Fang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.703.pdf