Zero-Resource Hallucination Prevention for Large Language Models

Junyu Luo, Cao Xiao, Fenglong Ma


Abstract
The prevalent use of large language models (LLMs) in various domains has drawn attention to the issue of “hallucination”, which refers to instances where LLMs generate factually inaccurate or ungrounded information. Existing techniques typically identify hallucinations after generation, so they cannot prevent their occurrence, and they suffer from inconsistent performance due to the influence of instruction format and model style. In this paper, we introduce a novel pre-detection self-evaluation technique, referred to as SELF-FAMILIARITY, which evaluates the model’s familiarity with the concepts present in the input instruction and withholds the generation of a response when unfamiliar concepts are detected, under the zero-resource setting where external ground-truth or background information is not available. We also propose Concept-7, a new dataset focusing on hallucinations caused by limited inner knowledge. We validate SELF-FAMILIARITY across four different large language models, demonstrating consistently superior performance compared to existing techniques. Our findings point to a significant shift towards preemptive strategies for hallucination mitigation in LLM assistants, promising improvements in reliability, applicability, and interpretability.
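
As a rough illustration of the pipeline the abstract describes (extract concepts from the instruction, self-rate familiarity, withhold the answer if anything looks unfamiliar), the following Python sketch shows one possible realization. The helpers extract_concepts, familiarity_score, and guarded_answer, the prompt wording, and the 0.5 threshold are hypothetical stand-ins for exposition, not the paper's actual prompts or scoring.

    # Minimal sketch of a SELF-FAMILIARITY-style pre-detection check.
    # Illustrative only; the paper's concept extraction and scoring differ.
    # `generate` stands in for any LLM completion function: prompt -> text.

    from typing import Callable, List

    def extract_concepts(instruction: str) -> List[str]:
        """Toy concept extractor: treat capitalized tokens as candidate concepts.
        A placeholder for the paper's actual concept-extraction step."""
        words = [w.strip(".,?!") for w in instruction.split()]
        return [w for w in words if w[:1].isupper() and len(w) > 3]

    def familiarity_score(generate: Callable[[str], str], concept: str) -> float:
        """Ask the model to self-rate its familiarity with a concept on a 0-1 scale."""
        prompt = (
            f"On a scale from 0 to 1, how familiar are you with the concept '{concept}'? "
            "Answer with a single number."
        )
        reply = generate(prompt)
        try:
            return max(0.0, min(1.0, float(reply.strip().split()[0])))
        except (ValueError, IndexError):
            return 0.0  # unparsable answer -> treat as unfamiliar

    def guarded_answer(generate: Callable[[str], str], instruction: str,
                       threshold: float = 0.5) -> str:
        """Withhold generation when any extracted concept looks unfamiliar."""
        for concept in extract_concepts(instruction):
            if familiarity_score(generate, concept) < threshold:
                return (f"I am not sufficiently familiar with '{concept}' "
                        "to answer reliably, so I will not guess.")
        return generate(instruction)

For a quick dry run, generate can be any callable returning text, e.g. a stub such as lambda p: "0.9", before wiring in a real model API.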
Anthology ID:
2024.findings-emnlp.204
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3586–3602
URL:
https://aclanthology.org/2024.findings-emnlp.204
Cite (ACL):
Junyu Luo, Cao Xiao, and Fenglong Ma. 2024. Zero-Resource Hallucination Prevention for Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3586–3602, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Zero-Resource Hallucination Prevention for Large Language Models (Luo et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.204.pdf
Software:
 2024.findings-emnlp.204.software.zip
Data:
 2024.findings-emnlp.204.data.zip