Unsupervised Hallucination Detection by Inspecting Reasoning Processes

Ponhvoan Srey, Xiaobao Wu, Anh Tuan Luu


Abstract
Unsupervised hallucination detection aims to identify hallucinated content generated by large language models (LLMs) without relying on labeled data. While unsupervised methods have gained popularity by eliminating labor-intensive human annotations, they frequently rely on proxy signals that are unrelated to factual correctness. This misalignment biases detection probes toward superficial or non-truth-related features, limiting generalizability across datasets and scenarios. To overcome these limitations, we propose IRIS, an unsupervised hallucination detection framework that leverages internal representations intrinsic to factual correctness. IRIS prompts the LLM to carefully verify the truthfulness of a given statement and obtains its contextualized embedding as an informative feature for training. Meanwhile, the uncertainty of each response is treated as a soft pseudo-label for truthfulness. Experimental results demonstrate that IRIS consistently outperforms existing unsupervised methods. Our approach is fully unsupervised, computationally inexpensive, and works well even with little training data, making it suitable for real-time detection.
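The abstract's pipeline (verification prompting, contextualized embeddings as features, response uncertainty as soft pseudo-labels, and a lightweight probe) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `embed_verification` stands in for prompting the LLM and extracting a hidden-state embedding, the soft labels stand in for verifier confidence, and the probe is a plain logistic regression trained against those soft labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_verification(statement: str, dim: int = 32) -> np.ndarray:
    # Placeholder (hypothetical): in the real pipeline, the LLM would be
    # prompted to verify the statement's truthfulness, and the contextualized
    # embedding of the response would be returned. Here we use random vectors
    # so the sketch is self-contained.
    return rng.normal(size=dim)

def train_probe(X: np.ndarray, y: np.ndarray, lr: float = 0.1,
                epochs: int = 500) -> tuple[np.ndarray, float]:
    """Logistic-regression probe trained against soft pseudo-labels in [0, 1]."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y  # gradient of soft-label cross-entropy w.r.t. the logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy data: embeddings of four statements plus soft truthfulness labels,
# standing in for the LLM's uncertainty about each verification response.
statements = ["s1", "s2", "s3", "s4"]
X = np.stack([embed_verification(s) for s in statements])
y = np.array([0.9, 0.8, 0.1, 0.2])  # soft pseudo-labels (verifier confidence)

w, b = train_probe(X, y)
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # truthfulness scores in [0, 1]
```

At inference time, a statement would be scored by embedding its verification response and applying the trained probe; no human labels enter the loop, matching the unsupervised setting described above.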
Anthology ID:
2025.emnlp-main.1124
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22128–22140
URL:
https://aclanthology.org/2025.emnlp-main.1124/
Cite (ACL):
Ponhvoan Srey, Xiaobao Wu, and Anh Tuan Luu. 2025. Unsupervised Hallucination Detection by Inspecting Reasoning Processes. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22128–22140, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Hallucination Detection by Inspecting Reasoning Processes (Srey et al., EMNLP 2025)
PDF:
https://aclanthology.org/2025.emnlp-main.1124.pdf
Checklist:
 2025.emnlp-main.1124.checklist.pdf