Improving LLM-based KGQA for multi-hop Question Answering with implicit reasoning in few-shot examples

Mili Shah, Joyce Cahoon, Mirco Milletari, Jing Tian, Fotis Psallidas, Andreas Mueller, Nick Litombe

Abstract
Large language models (LLMs) have shown remarkable capabilities in generating natural language text for various tasks. However, using LLMs for question answering on knowledge graphs (KGQA) remains a challenge, especially for questions that require multi-hop reasoning. In this paper, we present a novel planned query guidance approach that improves LLM performance on multi-hop KGQA. We do this by designing few-shot examples that implicitly demonstrate a systematic reasoning methodology for answering multi-hop questions. We evaluate our approach on two graph query languages, Cypher and SPARQL, and show that queries generated using our strategy outperform those generated by a baseline LLM with typical few-shot examples by up to 24.66% and 7.7% in execution match accuracy on the MetaQA and Spider benchmarks, respectively. We also conduct an ablation study to analyze the incremental effects of the individual few-shot example design techniques. Our results suggest that our approach enables the LLM to effectively leverage few-shot examples to generate queries for multi-hop KGQA.
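
To illustrate the general idea (this is not the paper's actual prompt, which is not reproduced on this page), a few-shot example for multi-hop KGQA can pair a natural-language question with a graph query whose traversal structure implicitly walks the reasoning chain hop by hop. Below is a minimal, hypothetical Python sketch that assembles such a prompt with Cypher examples in the style of a movie-domain graph like MetaQA; the schema labels, relationship types, and prompt wording are assumptions for illustration only.

# Minimal sketch (hypothetical): building a few-shot prompt whose examples
# implicitly demonstrate hop-by-hop reasoning for multi-hop KGQA in Cypher.
# The schema (Movie, Person, DIRECTED) and all wording are assumed for
# illustration; they are not taken from the paper.

FEW_SHOT_EXAMPLES = [
    {
        # 1-hop: the question maps to a single relationship traversal.
        "question": "Who directed the movie Inception?",
        "query": (
            "MATCH (p:Person)-[:DIRECTED]->(m:Movie {title: 'Inception'}) "
            "RETURN p.name"
        ),
    },
    {
        # 2-hop: the query chains two traversals, mirroring the implicit
        # reasoning steps "find the director" then "find that director's
        # other movies" without spelling them out in prose.
        "question": "What other movies were directed by the director of Inception?",
        "query": (
            "MATCH (m1:Movie {title: 'Inception'})<-[:DIRECTED]-(p:Person)"
            "-[:DIRECTED]->(m2:Movie) "
            "WHERE m2.title <> 'Inception' "
            "RETURN m2.title"
        ),
    },
]


def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt in which each example demonstrates the
    multi-hop traversal pattern the model should imitate."""
    parts = ["Translate the question into a Cypher query.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {ex['question']}\nQuery: {ex['query']}\n")
    parts.append(f"Question: {question}\nQuery:")
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt(
        "Which actors starred in movies directed by the director of Inception?"
    ))

The design point the sketch tries to capture is that the reasoning methodology lives in the structure of the example queries themselves (each hop appears as one traversal step), rather than in explicit chain-of-thought text.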
Anthology ID: 2024.kallm-1.13
Volume: Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues: KaLLM | WS
Publisher: Association for Computational Linguistics
Pages: 125–135
URL: https://aclanthology.org/2024.kallm-1.13
DOI: 10.18653/v1/2024.kallm-1.13
Cite (ACL): Mili Shah, Joyce Cahoon, Mirco Milletari, Jing Tian, Fotis Psallidas, Andreas Mueller, and Nick Litombe. 2024. Improving LLM-based KGQA for multi-hop Question Answering with implicit reasoning in few-shot examples. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 125–135, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Improving LLM-based KGQA for multi-hop Question Answering with implicit reasoning in few-shot examples (Shah et al., KaLLM-WS 2024)
PDF: https://aclanthology.org/2024.kallm-1.13.pdf