LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments
Ruirui Chen | Weifeng Jiang | Chengwei Qin | Ishaan Rawal | Cheston Tan | Dongkyu Choi | Bo Xiong | Bo Ai
Findings of the Association for Computational Linguistics: EMNLP 2024
The important challenge of keeping knowledge in Large Language Models (LLMs) up-to-date has led to the development of various methods for incorporating new facts. However, existing methods for such knowledge editing still face difficulties with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly among numerous fact updates. To tackle these challenges, this paper introduces Graph Memory-based Editing for Large Language Models (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs. Beyond merely leveraging LLMs for question answering, GMeLLo employs these models to convert free-form language into structured queries and fact triples, facilitating seamless interaction with KGs for rapid updates and precise multi-hop reasoning. Our results show that GMeLLo significantly surpasses current state-of-the-art (SOTA) knowledge editing methods in the multi-hop question answering benchmark, MQuAKE, especially in scenarios with extensive knowledge edits.
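To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: an LLM turns free-form edit sentences into fact triples that update a knowledge graph, and turns a multi-hop question into a structured query (here, a start entity plus a chain of relations) that is answered by walking the edited graph. Everything in it, including the function names, the in-memory triple store, and the relation-chain query format, is a hypothetical illustration, not the authors' implementation.

```python
# Illustrative sketch of a GMeLLo-style edit-then-query pipeline.
# All names and data structures here are hypothetical stand-ins.

from typing import Dict, List, Optional, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)


class TripleStore:
    """Toy knowledge graph: maps (subject, relation) -> object."""

    def __init__(self, triples: List[Triple]):
        self.facts: Dict[Tuple[str, str], str] = {(s, r): o for s, r, o in triples}

    def apply_edit(self, triple: Triple) -> None:
        s, r, o = triple
        self.facts[(s, r)] = o  # an edit overwrites the stale fact

    def lookup(self, subject: str, relation: str) -> Optional[str]:
        return self.facts.get((subject, relation))


def llm_sentence_to_triple(sentence: str) -> Triple:
    """Hypothetical LLM call: convert a free-form edit sentence into a fact triple."""
    raise NotImplementedError


def llm_question_to_relation_chain(question: str) -> Tuple[str, List[str]]:
    """Hypothetical LLM call: convert a multi-hop question into a start entity
    plus an ordered chain of relations (a structured query over the KG)."""
    raise NotImplementedError


def answer(question: str, edits: List[str], kg: TripleStore) -> Optional[str]:
    # 1. Push each natural-language edit into the KG as a fact triple.
    for sentence in edits:
        kg.apply_edit(llm_sentence_to_triple(sentence))

    # 2. Convert the question into a structured query and walk the KG hop by hop.
    entity, relations = llm_question_to_relation_chain(question)
    for relation in relations:
        next_entity = kg.lookup(entity, relation)
        if next_entity is None:
            return None  # the full method would fall back to the LLM's own answer here
        entity = next_entity
    return entity
```

In this reading, keeping the edited facts in an explicit triple store is what makes large batches of edits cheap to apply and query, while the LLM is used only for the two translation steps (sentence to triple, question to structured query).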