Do Language Models Know When They’re Hallucinating References?

Ayush Agrawal, Mirac Suzgun, Lester Mackey, Adam Kalai


Abstract
State-of-the-art language models (LMs) are notoriously susceptible to generating hallucinated information. Such inaccurate outputs not only undermine the reliability of these models but also limit their use and raise serious concerns about misinformation and propaganda. In this work, we focus on hallucinated book and article references and present them as the “model organism” of language model hallucination research, because they are frequent and easy to discern. We posit that if a language model cites a particular reference in its output, then it should ideally possess sufficient information about that reference’s authors and content, among other relevant details. Using this basic insight, we illustrate that one can identify hallucinated references without ever consulting any external resources, simply by asking the language model a set of direct or indirect queries about the references. These queries can be considered “consistency checks.” Our findings highlight that while LMs, including GPT-4, often produce inconsistent author lists for hallucinated references, they also often accurately recall the authors of real references. In this sense, the LM can be said to “know” when it is hallucinating references. Furthermore, these findings show how hallucinated references can be dissected to shed light on their nature.
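The consistency-check idea in the abstract can be illustrated with a short sketch: repeatedly ask the model for the authors of a cited title and measure how much the answers agree. The `query_lm` wrapper and the similarity-based agreement score below are hypothetical stand-ins for illustration only; they are not the paper's exact prompts or metric.

```python
# Minimal sketch of the consistency-check idea, assuming a generic chat-style LM.
# `query_lm` is a hypothetical placeholder for whatever LM client is available.

from difflib import SequenceMatcher

def query_lm(prompt: str) -> str:
    """Hypothetical wrapper around a language model API call."""
    raise NotImplementedError("Plug in your LM client here.")

def author_consistency(title: str, n_samples: int = 3) -> float:
    """Ask the LM repeatedly for the authors of `title` and measure agreement.

    Low agreement across samples suggests the reference may be hallucinated;
    high agreement suggests the LM genuinely recalls the work.
    """
    prompt = f"Who are the authors of the work titled '{title}'? List names only."
    answers = [query_lm(prompt) for _ in range(n_samples)]

    # Pairwise string similarity as a crude proxy for author-list agreement.
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            scores.append(SequenceMatcher(None, answers[i], answers[j]).ratio())
    return sum(scores) / len(scores) if scores else 0.0
```

In practice one would threshold the agreement score (or combine it with other direct and indirect queries) to flag likely hallucinated references, all without consulting any external database.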
Anthology ID:
2024.findings-eacl.62
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
912–928
URL:
https://aclanthology.org/2024.findings-eacl.62
Cite (ACL):
Ayush Agrawal, Mirac Suzgun, Lester Mackey, and Adam Kalai. 2024. Do Language Models Know When They’re Hallucinating References?. In Findings of the Association for Computational Linguistics: EACL 2024, pages 912–928, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Do Language Models Know When They’re Hallucinating References? (Agrawal et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.62.pdf
Software:
 2024.findings-eacl.62.software.zip
Note:
 2024.findings-eacl.62.note.zip