Knowledge Graph-augmented Language Models for Complex Question Answering

Priyanka Sen, Sandeep Mavadia, Amir Saffari


Abstract
Large language models have shown impressive abilities to reason over input text; however, they are prone to hallucinations. In contrast, end-to-end knowledge graph question answering (KGQA) models produce responses grounded in facts, but they still struggle with complex reasoning, such as comparison or ordinal questions. In this paper, we propose a new method for complex question answering that combines a knowledge graph retriever, based on an end-to-end KGQA model, with a language model that reasons over the retrieved facts to return an answer. We observe that augmenting language model prompts with retrieved KG facts improves performance over using a language model alone by an average of 83%. In particular, we see improvements on complex questions requiring count, intersection, or multi-hop reasoning operations.
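The method described in the abstract is a retrieve-then-read pipeline: a KGQA-based retriever selects facts from the knowledge graph, and those facts are prepended to the language model prompt so the model reasons over grounded context. Below is a minimal sketch of that prompting step, under stated assumptions: the retriever here (retrieve_kg_facts) is a hypothetical stand-in returning illustrative triples, not the paper's trained end-to-end KGQA model, and the prompt template is an assumption rather than the authors' exact format.

```python
# Minimal sketch of KG-augmented prompting (retrieve facts, then
# build an augmented prompt for a language model).

def retrieve_kg_facts(question: str) -> list[str]:
    """Hypothetical stand-in for the paper's end-to-end KGQA retriever.

    The real component scores knowledge-graph facts against the
    question; this stub just returns illustrative
    (subject | relation | object) triples serialized as text.
    """
    return [
        "Toronto | country | Canada",
        "Toronto | population | 2,794,356",
    ]

def build_prompt(question: str, facts: list[str]) -> str:
    """Prepend retrieved facts to the question so the language model
    reasons over grounded context instead of parametric memory alone."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer the question using the facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

if __name__ == "__main__":
    question = "Which country is Toronto in?"
    prompt = build_prompt(question, retrieve_kg_facts(question))
    print(prompt)
```

Running this prints the augmented prompt, which can then be sent to any instruction-tuned language model; the paper's reported gains come from this fact-augmented prompting compared with prompting the language model alone.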
Anthology ID: 2023.nlrse-1.1
Volume: Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)
Month: June
Year: 2023
Address: Toronto, Canada
Editors: Bhavana Dalvi Mishra, Greg Durrett, Peter Jansen, Danilo Neves Ribeiro, Jason Wei
Venue: NLRSE
Publisher: Association for Computational Linguistics
Pages: 1–8
URL: https://aclanthology.org/2023.nlrse-1.1
DOI: 10.18653/v1/2023.nlrse-1.1
Cite (ACL): Priyanka Sen, Sandeep Mavadia, and Amir Saffari. 2023. Knowledge Graph-augmented Language Models for Complex Question Answering. In Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE), pages 1–8, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Knowledge Graph-augmented Language Models for Complex Question Answering (Sen et al., NLRSE 2023)
PDF: https://aclanthology.org/2023.nlrse-1.1.pdf