Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge

Angus Addlesee


Abstract
When deploying LLMs in certain commercial or research settings, domain-specific knowledge must be explicitly provided within the prompt. This in-prompt knowledge can conflict with an LLM’s static world knowledge learned at pre-training, causing model hallucination (see examples in Table 1). In safety-critical settings, like healthcare and finance, these hallucinations can harm vulnerable users. We have curated a QA corpus containing information that LLMs could not have seen at pre-training. Using our corpus, we have probed various LLMs, manipulating both the prompt and the knowledge representation. We have found that our ‘Jodie’ prompt consistently improves the model’s textual grounding to the given knowledge, and in turn the overall answer accuracy. This is true in both the healthcare and finance domains, improving accuracy by up to 28% (mean: 12%). We have also identified that hierarchical and direct node-property graph structures could lead to more interpretable and controllable systems that provide a natural language interface with real-time in-domain knowledge. Our corpus will enable further work on this critical challenge.
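The abstract does not reproduce the ‘Jodie’ prompt or the corpus itself, so the sketch below is only a hypothetical illustration of the general setup it describes: a direct node-property graph serialized into in-prompt text, paired with an explicit instruction to answer from that knowledge alone. The serialization format, the instruction wording, and all names and facts are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of in-prompt knowledge grounding (NOT the paper's
# actual 'Jodie' prompt, which is not reproduced in the abstract).
# A direct node-property graph is flattened into one fact per line and
# placed in the prompt with an explicit grounding instruction, so the
# model answers from the supplied facts rather than its static
# pre-training knowledge. All names and facts below are invented.

def serialize_graph(graph: dict[str, dict[str, str]]) -> str:
    """Flatten a node-property graph into one 'node -- property --> value' fact per line."""
    return "\n".join(
        f"{node} -- {prop} --> {value}"
        for node, properties in graph.items()
        for prop, value in properties.items()
    )

def build_grounded_prompt(graph: dict[str, dict[str, str]], question: str) -> str:
    """Combine a grounding instruction, the serialized knowledge, and the question."""
    return (
        "Answer using ONLY the knowledge below. If the answer is not "
        "present, say that you do not know.\n\n"
        f"Knowledge:\n{serialize_graph(graph)}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # Post-pre-training facts the model could not have memorized.
    graph = {"AcmeCare Clinic": {"opening hours": "08:00-18:00",
                                 "lead clinician": "Dr. Dana Reyes"}}
    print(build_grounded_prompt(graph, "Who is the lead clinician at AcmeCare Clinic?"))
```

The resulting string would then be sent to whichever LLM is being probed; swapping this flat serialization for a hierarchical (nested) one is the kind of knowledge-representation manipulation the abstract compares.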
Anthology ID: 2024.safety4convai-1.1
Volume: Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024
Month: May
Year: 2024
Address: Torino, Italia
Editors: Tanvi Dinkar, Giuseppe Attanasio, Amanda Cercas Curry, Ioannis Konstas, Dirk Hovy, Verena Rieser
Venues: Safety4ConvAI | WS
Publisher: ELRA and ICCL
Pages: 1–7
URL: https://aclanthology.org/2024.safety4convai-1.1
Cite (ACL): Angus Addlesee. 2024. Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge. In Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024, pages 1–7, Torino, Italia. ELRA and ICCL.
Cite (Informal): Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge (Addlesee, Safety4ConvAI-WS 2024)
PDF: https://aclanthology.org/2024.safety4convai-1.1.pdf