Improving LLM-based KGQA for multi-hop Question Answering with implicit reasoning in few-shot examples
Mili Shah | Joyce Cahoon | Mirco Milletari | Jing Tian | Fotis Psallidas | Andreas Mueller | Nick Litombe
Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Large language models (LLMs) have shown remarkable capabilities in generating natural language text for various tasks. However, using LLMs for question answering on knowledge graphs (KGQA) remains a challenge, especially for questions requiring multi-hop reasoning. In this paper, we present a novel planned query guidance approach that improves LLM performance in multi-hop KGQA. We do this by designing few-shot examples that implicitly demonstrate a systematic reasoning methodology for answering multi-hop questions. We evaluate our approach on two graph query languages, Cypher and SPARQL, and show that the queries generated using our strategy outperform those generated using a baseline LLM with typical few-shot examples by up to 24.66% and 7.7% in execution match accuracy on the MetaQA and Spider benchmarks, respectively. We also conduct an ablation study to analyze the incremental effects of the individual few-shot example design techniques. Our results suggest that our approach enables the LLM to effectively leverage few-shot examples to generate queries for multi-hop KGQA.
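To make the idea concrete, here is a minimal sketch of the kind of few-shot exemplar the abstract describes: a multi-hop question paired with a Cypher query whose hop-by-hop structure implicitly demonstrates the reasoning. The question, graph schema, and query below are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical few-shot exemplar for multi-hop Cypher generation on a
# MetaQA-style movie graph. The schema (Movie, Director, Actor nodes;
# DIRECTED, ACTED_IN relationships) is an assumption for illustration.
FEW_SHOT_EXAMPLE = """\
Question: Which actors starred in movies directed by the director of Inception?

MATCH (m1:Movie {title: 'Inception'})<-[:DIRECTED]-(d:Director)
MATCH (d)-[:DIRECTED]->(m2:Movie)<-[:ACTED_IN]-(a:Actor)
RETURN DISTINCT a.name
"""

def build_prompt(question: str) -> str:
    """Prepend the exemplar so the model can imitate its hop-by-hop
    MATCH structure when answering a new multi-hop question."""
    return f"{FEW_SHOT_EXAMPLE}\nQuestion: {question}\n"
```

Note that the exemplar never states the reasoning in prose; the chained MATCH clauses themselves encode the two hops (film to director, director to cast), which is one plausible reading of "implicit reasoning" here.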