HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs

Cem Uluoglakci, Tugba Temizel


Abstract
Hallucinations pose a significant challenge to the reliability and alignment of Large Language Models (LLMs), limiting their widespread acceptance beyond chatbot applications. Despite ongoing efforts, hallucinations remain a prevalent problem in LLMs. Detecting hallucinations is itself a formidable task, frequently requiring manual labeling or constrained evaluations. This paper introduces an automated, scalable framework that combines benchmarking LLMs’ hallucination tendencies with efficient hallucination detection. We leverage LLMs to generate challenging tasks related to hypothetical phenomena, subsequently employing them as agents for efficient hallucination detection. The framework is domain-agnostic, allowing the use of any language model for benchmark creation or evaluation in any domain. We introduce the publicly available HypoTermQA Benchmarking Dataset, on which state-of-the-art models’ performance ranged between 3% and 11%, and evaluator agents demonstrated a 6% error rate in hallucination prediction. The proposed framework provides opportunities to test and improve LLMs. Additionally, it has the potential to generate benchmarking datasets tailored to specific domains, such as law, health, and finance.
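The core idea of the benchmark is that questions about hypothetical (non-existent) terms have no correct factual answer, so any confident factual response is a hallucination. The following is a minimal illustrative sketch of that scoring logic, not the authors’ released code: the keyword-based `is_hallucination` check stands in for the paper’s LLM evaluator agents, and `fabricator` is a hypothetical stand-in for a model under test.

```python
from typing import Callable

# Toy proxy for an evaluator agent: an answer counts as non-hallucinated
# only if it signals that the hypothetical term is unknown or made up.
REFUSAL_MARKERS = ("not aware", "no known", "does not exist", "unfamiliar", "fictional")

def is_hallucination(answer: str) -> bool:
    """Return True if the answer treats a hypothetical term as real."""
    lowered = answer.lower()
    return not any(marker in lowered for marker in REFUSAL_MARKERS)

def hypotermqa_score(model: Callable[[str], str], questions: list[str]) -> float:
    """Percentage of hypothetical-term questions answered without hallucinating."""
    valid = sum(not is_hallucination(model(q)) for q in questions)
    return 100.0 * valid / len(questions)

# Usage with a stub model that always fabricates an answer
# (term names below are invented for illustration):
def fabricator(question: str) -> str:
    return "It is a well-known technique introduced in 2015."

questions = ["What is the Borelian Cascade Method?", "Explain Glimmerstat Analysis."]
print(hypotermqa_score(fabricator, questions))  # 0.0
```

A real instantiation would replace both stubs with LLM calls: the model under test generates the answer, and a second LLM acts as the evaluator agent, which is what the reported 6% evaluator error rate refers to.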
Anthology ID:
2024.eacl-srw.9
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Neele Falk, Sara Papi, Mike Zhang
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
95–136
URL:
https://aclanthology.org/2024.eacl-srw.9
Cite (ACL):
Cem Uluoglakci and Tugba Temizel. 2024. HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 95–136, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs (Uluoglakci & Temizel, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-srw.9.pdf