Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs
Junjie Wang | Mingyang Chen | Binbin Hu | Dan Yang | Ziqi Liu | Yue Shen | Peng Wei | Zhiqiang Zhang | Jinjie Gu | Jun Zhou | Jeff Pan | Wen Zhang | Huajun Chen
Findings of the Association for Computational Linguistics: EMNLP 2024
Improving the performance of large language models (LLMs) in complex question-answering (QA) scenarios has long been a central research focus. Recent studies attempt to enhance LLMs’ performance by combining step-wise planning with external retrieval. While this approach works well for advanced models such as GPT-3.5, smaller LLMs struggle to decompose complex questions and therefore require supervised fine-tuning. Previous work has relied on manual annotation and knowledge distillation from teacher LLMs, both of which are time-consuming and insufficiently accurate. In this paper, we introduce a novel framework for enhancing LLMs’ planning capabilities using planning data derived from knowledge graphs (KGs). LLMs fine-tuned with this data have improved planning capabilities, better equipping them to handle complex QA tasks that involve retrieval. Evaluations on multiple datasets, including our newly proposed benchmark, demonstrate the effectiveness of our framework and the benefits of KG-derived planning data.
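The abstract describes the framework only at a high level. As an illustration of the general idea rather than the authors’ actual pipeline, a minimal sketch of turning a multi-hop KG path into step-wise planning supervision might look like the following, where `Triple`, `path_to_plan`, and the output format are all hypothetical:

```python
# Hypothetical sketch: deriving step-wise planning data from a KG path.
# All names and data shapes here are illustrative assumptions,
# not the paper's actual data-construction pipeline.

from dataclasses import dataclass


@dataclass
class Triple:
    head: str
    relation: str
    tail: str


def path_to_plan(question: str, path: list[Triple]) -> dict:
    """Convert a multi-hop KG path into a question + step-wise plan example."""
    steps = []
    for i, t in enumerate(path, start=1):
        # Each hop in the path becomes one retrieval sub-goal in the plan.
        steps.append(f"Step {i}: retrieve the {t.relation} of {t.head}.")
    steps.append(f"Step {len(path) + 1}: answer with {path[-1].tail}.")
    return {"question": question, "plan": steps, "answer": path[-1].tail}


# Example: a 2-hop path behind "Where was the director of Inception born?"
path = [
    Triple("Inception", "director", "Christopher Nolan"),
    Triple("Christopher Nolan", "place of birth", "London"),
]
example = path_to_plan("Where was the director of Inception born?", path)
print(example["plan"])
```

Fine-tuning on such question–plan pairs is what would teach a smaller LLM to decompose complex questions into retrieval steps, per the abstract’s description.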