Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering
Yuhang Tian | Dandan Song | Zhijing Wu | Changzhi Zhou | Hao Wang | Jun Yang | Jing Xu | Ruanmin Cao | HaoYu Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
Recently, significant progress has been made in employing Large Language Models (LLMs) for semantic parsing to address Knowledge Base Question Answering (KBQA) tasks. Previous work utilizes LLMs to generate query statements over Knowledge Bases (KBs) to retrieve answers. However, in these methods LLMs often generate incorrect query statements because they lack relevant knowledge. To address this, we propose a framework called Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering (ARG-KBQA), which retrieves question-related graph structures to improve the performance of LLMs. Unlike other methods that directly retrieve relations or triples from KBs, we introduce an unsupervised two-stage ranker that performs multi-hop beam search on KBs, providing LLMs with information more relevant to the questions. Experimental results demonstrate that ARG-KBQA sets a new state of the art on GrailQA and WebQSP under the few-shot setting. Additionally, ARG-KBQA significantly outperforms previous few-shot methods on questions whose query statements are unseen in the training data.
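To make the abstract's core retrieval idea concrete, here is a minimal sketch of multi-hop beam search over KB relation paths. This is not the paper's implementation: the toy KB, the `score_path` heuristic (a simple token-overlap stand-in for the unsupervised two-stage ranker), and all names below are hypothetical, chosen only to illustrate how a beam of relation paths is expanded hop by hop and pruned by question relevance.

```python
from typing import Dict, List, Tuple

# Hypothetical toy KB: entity -> list of (relation, neighbor entity) edges.
KB: Dict[str, List[Tuple[str, str]]] = {
    "Q_topic": [("author_of", "Q_book"), ("born_in", "Q_city")],
    "Q_book": [("genre", "Q_genre"), ("published_by", "Q_press")],
    "Q_city": [("located_in", "Q_country")],
}

def score_path(question: str, path: Tuple[str, ...]) -> float:
    """Hypothetical relevance scorer: token overlap between the question
    and relation names stands in for the paper's two-stage ranker."""
    tokens = set(question.lower().replace("?", "").split())
    return float(sum(any(t in rel for t in tokens) for rel in path))

def beam_search(question: str, topic: str, hops: int = 2, beam: int = 2):
    """Expand relation paths from the topic entity hop by hop,
    keeping only the top-`beam` scoring paths after each hop."""
    frontier = [((), topic, 0.0)]  # (relation path, current entity, score)
    for _ in range(hops):
        candidates = []
        for path, ent, _ in frontier:
            for rel, nxt in KB.get(ent, []):
                new_path = path + (rel,)
                candidates.append((new_path, nxt, score_path(question, new_path)))
        if not candidates:
            break
        frontier = sorted(candidates, key=lambda c: c[2], reverse=True)[:beam]
    return frontier

# Example: the surviving paths would be handed to the LLM as graph context.
print(beam_search("Which country was the author born in?", "Q_topic"))
```

The retained relation paths (here, e.g. `born_in -> located_in`) are the kind of question-related graph structure the framework supplies to the LLM before query generation.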