Trust Me, I’m Wrong: LLMs Hallucinate with Certainty Despite Knowing the Answer

Adi Simhi, Itay Itzhak, Fazl Barez, Gabriel Stanovsky, Yonatan Belinkov


Abstract
Prior work on large language model (LLM) hallucinations has associated them with model uncertainty or inaccurate knowledge. In this work, we define and investigate a distinct type of hallucination, in which a model consistently answers a question correctly, yet a seemingly trivial perturbation of the kind that can arise in real-world settings causes it to produce a hallucinated response with high certainty. This phenomenon, which we dub CHOKE (Certain Hallucinations Overriding Known Evidence), is particularly concerning in high-stakes domains such as medicine or law, where model certainty is often used as a proxy for reliability. We show that CHOKE examples are consistent across prompts, occur in different models and datasets, and are fundamentally distinct from other hallucinations. This difference leads existing mitigation methods to perform worse on CHOKE examples than on general hallucinations. Finally, we introduce a probing-based mitigation that outperforms existing methods on CHOKE hallucinations. These findings reveal an overlooked aspect of hallucinations, emphasizing the need to understand their origins and improve mitigation strategies to enhance LLM safety.
Anthology ID: 2025.findings-emnlp.792
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14665–14688
URL: https://aclanthology.org/2025.findings-emnlp.792/
Cite (ACL): Adi Simhi, Itay Itzhak, Fazl Barez, Gabriel Stanovsky, and Yonatan Belinkov. 2025. Trust Me, I’m Wrong: LLMs Hallucinate with Certainty Despite Knowing the Answer. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14665–14688, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Trust Me, I’m Wrong: LLMs Hallucinate with Certainty Despite Knowing the Answer (Simhi et al., Findings 2025)
PDF: https://aclanthology.org/2025.findings-emnlp.792.pdf
Checklist: 2025.findings-emnlp.792.checklist.pdf