On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?

Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy
Abstract
Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination. In this work, we investigate the underlying causes of this phenomenon: is hallucination due to the training data, or to the models? We conduct a comprehensive human study on both existing knowledge-grounded conversational benchmarks and several state-of-the-art models. Our study reveals that the standard benchmarks consist of more than 60% hallucinated responses, leading to models that not only hallucinate but even amplify hallucinations. Our findings raise important questions on the quality of existing datasets and of models trained on them. We make our annotations publicly available for future research.
Anthology ID:
2022.naacl-main.387
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5271–5285
URL:
https://aclanthology.org/2022.naacl-main.387
DOI:
10.18653/v1/2022.naacl-main.387
Cite (ACL):
Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022. On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5271–5285, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? (Dziri et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.387.pdf
Code
 mcgill-nlp/faithdial
Data
Wizard of Wikipedia