On Reducing Factual Hallucinations in Graph-to-Text Generation Using Large Language Models

Dmitrii Iarosh, Alexander Panchenko, Mikhail Salnikov


Abstract
Recent work on Graph-to-Text generation has achieved impressive results, yet the generated text still contains hallucinations in some cases, despite extensive pretraining and various methods for working with graph data. Moreover, the metrics commonly used to evaluate Graph-to-Text models report near-perfect scores, which makes it difficult to compare different approaches. This paper demonstrates the hallucination problems of recent Graph-to-Text systems and proposes a simple yet effective approach based on a general-purpose LLM that achieves state-of-the-art results and reduces the number of factual hallucinations. We provide step-by-step instructions on how to develop prompts for language models and a detailed analysis of potential factual errors in the generated text.
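The paper's actual prompts and analysis are in the PDF; as a rough illustration of the general idea summarized above, the sketch below prompts a general-purpose chat LLM to verbalize a small set of WebNLG-style triples while instructing it not to add unlisted facts. The prompt wording, the `gpt-4o-mini` model name, and the OpenAI-style client are assumptions for illustration only, not the authors' setup.

```python
# Minimal sketch: verbalize knowledge-graph triples with a general-purpose LLM.
# Prompt wording and model name are placeholders, not the paper's exact prompt.
from openai import OpenAI

def triples_to_prompt(triples):
    """Render (subject, relation, object) triples as a flat list for the prompt."""
    triple_lines = "\n".join(f"({s} | {r} | {o})" for s, r, o in triples)
    return (
        "Verbalize the following knowledge-graph triples as fluent English text.\n"
        "Use every triple exactly once and do not add any fact that is not listed.\n\n"
        + triple_lines
    )

def graph_to_text(triples, model="gpt-4o-mini"):
    """Ask a chat-style LLM to verbalize the triples."""
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": triples_to_prompt(triples)}],
        temperature=0.0,  # deterministic decoding to discourage added "facts"
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    demo = [
        ("Alan_Bean", "occupation", "Test_pilot"),
        ("Alan_Bean", "was_a_crew_member_of", "Apollo_12"),
    ]
    print(graph_to_text(demo))
```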
Anthology ID:
2025.genaik-1.5
Volume:
Proceedings of the Workshop on Generative AI and Knowledge Graphs (GenAIK)
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Genet Asefa Gesese, Harald Sack, Heiko Paulheim, Albert Meroño-Peñuela, Lihu Chen
Venues:
GenAIK | WS
Publisher:
International Committee on Computational Linguistics
Pages:
43–53
URL:
https://aclanthology.org/2025.genaik-1.5/
Cite (ACL):
Dmitrii Iarosh, Alexander Panchenko, and Mikhail Salnikov. 2025. On Reducing Factual Hallucinations in Graph-to-Text Generation Using Large Language Models. In Proceedings of the Workshop on Generative AI and Knowledge Graphs (GenAIK), pages 43–53, Abu Dhabi, UAE. International Committee on Computational Linguistics.
Cite (Informal):
On Reducing Factual Hallucinations in Graph-to-Text Generation Using Large Language Models (Iarosh et al., GenAIK 2025)
PDF:
https://aclanthology.org/2025.genaik-1.5.pdf