No Need for Large-Scale Search: Exploring Large Language Models in Complex Knowledge Base Question Answering

Shouhui Wang, Biao Qin


Abstract
Knowledge Base Question Answering (KBQA) systems play a pivotal role in natural language processing and information retrieval. Their primary objective is to bridge the gap between natural language questions and structured knowledge representations, especially for complex KBQA. Despite significant progress in developing effective KBQA technologies, the recent emergence of large language models (LLMs) offers an opportunity to address the challenges faced by KBQA systems more efficiently. This study adopts LLMs, such as Large Language Model Meta AI (LLaMA), as a channel connecting natural language questions with structured knowledge representations, and proposes a Three-step Fine-tuning Strategy for KBQA based on large language models (TFS-KBQA). The method converts natural language questions directly into structured knowledge representations, thereby avoiding limitations of existing KBQA methods such as searching and reasoning over large spaces and ranking massive sets of candidates. To evaluate its effectiveness, we conduct experiments on three popular complex KBQA datasets. The proposed method achieves state-of-the-art performance across all three datasets, with particularly notable results on WebQuestionsSP, where it reaches an F1 score of 79.9%.
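Since the abstract describes generating structured knowledge representations directly from natural language questions with a fine-tuned LLM, a minimal illustrative sketch follows. This is not the authors' released code: the checkpoint path, prompt format, and example output are assumptions chosen to show the direct question-to-logical-form generation the abstract describes.

```python
# Minimal sketch of direct question-to-logical-form generation with a
# fine-tuned causal LM, in the spirit of TFS-KBQA. The model path, prompt
# template, and sample output are hypothetical, for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/fine-tuned-llama"  # assumed fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

question = "what is the name of justin bieber brother"
# Assumed instruction-style prompt asking the model for a logical form;
# no candidate enumeration or ranking step is involved.
prompt = (
    "Translate the question into a logical form.\n"
    f"Question: {question}\nLogical form:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens after the prompt.
logical_form = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(logical_form)  # e.g. an S-expression over the KB (illustrative)
```

The generated logical form would then be executed against the knowledge base to retrieve the answer, which is how such a pipeline sidesteps large search and reasoning spaces.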
Anthology ID:
2024.lrec-main.1074
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
12288–12299
URL:
https://aclanthology.org/2024.lrec-main.1074
Cite (ACL):
Shouhui Wang and Biao Qin. 2024. No Need for Large-Scale Search: Exploring Large Language Models in Complex Knowledge Base Question Answering. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12288–12299, Torino, Italia. ELRA and ICCL.
Cite (Informal):
No Need for Large-Scale Search: Exploring Large Language Models in Complex Knowledge Base Question Answering (Wang & Qin, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1074.pdf