The Pitfalls of Defining Hallucination

Kees van Deemter


Abstract
Despite impressive advances in Natural Language Generation (NLG) and Large Language Models (LLMs), researchers are still unclear about important aspects of NLG evaluation. To substantiate this claim, I examine current classifications of hallucination and omission in data-text NLG, and I propose a logic-based synthesis of these classifications. I conclude by highlighting some remaining limitations of all current thinking about hallucination and by discussing implications for LLMs.
Anthology ID: 2024.cl-2.10
Volume: Computational Linguistics, Volume 50, Issue 2 - June 2024
Month: June
Year: 2024
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 807–816
URL: https://aclanthology.org/2024.cl-2.10
DOI: 10.1162/coli_a_00509
Cite (ACL): Kees van Deemter. 2024. The Pitfalls of Defining Hallucination. Computational Linguistics, 50(2):807–816.
Cite (Informal): The Pitfalls of Defining Hallucination (van Deemter, CL 2024)
PDF: https://aclanthology.org/2024.cl-2.10.pdf