Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models

Hongbang Yuan, Pengfei Cao, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao


Abstract
Large Language Models (LLMs) have shown impressive capabilities but still suffer from hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon in which LLMs generate hallucinated text when confronted with false premise questions. In this paper, we perform a comprehensive analysis of false premise hallucinations and elucidate their internal working mechanism: a small subset of attention heads (which we designate as false premise heads) disturbs the knowledge extraction process, leading to false premise hallucinations. Based on this analysis, we propose FAITH (False premise Attention head constraIning for miTigating Hallucinations), a novel and effective method that constrains the false premise attention heads during model inference. Extensive experiments demonstrate that constraining only about 1% of the attention heads in the model yields a notable improvement of nearly 20% in model performance.
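The paper's released code is not reproduced on this page, but the core intervention it describes, suppressing a small set of attention heads at inference time, can be sketched. Below is a minimal, hypothetical PyTorch illustration that zeroes the output of chosen heads in a LLaMA-style Hugging Face model by masking their slices of the input to each layer's output projection. The model name, the layer/head indices, and the zeroing strategy are placeholder assumptions for illustration, not the paper's identified false premise heads or its exact constraining method.

```python
# Sketch (not the authors' code): constrain selected attention heads at
# inference by zeroing their per-head slices of the input to o_proj.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"      # assumed LLaMA-style model
HEADS_TO_CONSTRAIN = {10: [3, 7], 12: [0]}   # {layer: [head indices]} -- hypothetical

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model.eval()

head_dim = model.config.hidden_size // model.config.num_attention_heads

def make_pre_hook(head_ids):
    # The input to o_proj is the concatenation of all head outputs:
    # shape (batch, seq_len, num_heads * head_dim). Zeroing a head's
    # slice removes that head's contribution to the residual stream.
    def pre_hook(module, args):
        (hidden,) = args
        hidden = hidden.clone()
        for h in head_ids:
            hidden[..., h * head_dim:(h + 1) * head_dim] = 0.0
        return (hidden,)
    return pre_hook

handles = []
for layer_idx, head_ids in HEADS_TO_CONSTRAIN.items():
    o_proj = model.model.layers[layer_idx].self_attn.o_proj
    handles.append(o_proj.register_forward_pre_hook(make_pre_hook(head_ids)))

prompt = "Why does the sun orbit the earth?"  # an example false premise question
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))

for handle in handles:
    handle.remove()  # restore the unconstrained model
```

Since the hooks are removable, the same loaded model can be compared with and without the constrained heads on a set of false premise questions; the paper reports that intervening on roughly 1% of heads is enough to change behavior substantially.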
Anthology ID: 2024.emnlp-main.155
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2670–2683
URL: https://aclanthology.org/2024.emnlp-main.155
Cite (ACL): Hongbang Yuan, Pengfei Cao, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, and Jun Zhao. 2024. Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2670–2683, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models (Yuan et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.155.pdf