Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning

Mayur Patidar, Riya Sawhney, Avinash Singh, Biswajit Chatterjee, Mausam, Indrajit Bhattacharya


Abstract
Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data, which makes them costly and time-consuming to deploy. We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. We propose a novel KBQA architecture called FuSIC-KBQA that performs KB retrieval using multiple source-trained retrievers, re-ranks the retrieved candidates using an LLM, and uses the result as input for LLM few-shot in-context learning to generate logical forms, which are further refined using execution-guided feedback. Experiments over four source-target KBQA pairs of varying complexity show that FuSIC-KBQA significantly outperforms adaptations of SoTA KBQA models for this setting. Additional experiments in the in-domain setting show that FuSIC-KBQA also outperforms SoTA KBQA models when training data is limited.
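The pipeline the abstract describes (multi-retriever KB retrieval, LLM re-ranking, few-shot logical-form generation, execution-guided refinement) can be sketched as follows. This is a minimal illustration of the control flow only; the function names, interfaces, and the feedback loop shape are assumptions, not the authors' implementation.

```python
def retrieve(question, retrievers):
    """Pool candidate KB elements from multiple source-trained retrievers."""
    candidates = []
    for retriever in retrievers:
        candidates.extend(retriever(question))
    return list(dict.fromkeys(candidates))  # deduplicate, preserving order

def rerank(question, candidates, llm):
    """Re-rank pooled candidates by an LLM-assigned relevance score."""
    return sorted(candidates, key=lambda c: -llm.score(question, c))

def generate_with_feedback(question, context, llm, execute, max_rounds=3):
    """Generate a logical form via few-shot in-context learning, then
    refine it using execution-guided feedback from the KB."""
    logical_form = llm.generate(question, context)
    for _ in range(max_rounds):
        ok, error = execute(logical_form)
        if ok:
            break
        # Feed the execution error back to the LLM for refinement.
        logical_form = llm.generate(question, context, feedback=error)
    return logical_form
```

A deployment would plug in real retrievers trained on the source domain, an LLM prompted with the few target-domain examples, and an executor for the target KB's query language.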
Anthology ID:
2024.acl-long.495
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
9147–9165
URL:
https://aclanthology.org/2024.acl-long.495
Cite (ACL):
Mayur Patidar, Riya Sawhney, Avinash Singh, Biswajit Chatterjee, Mausam, and Indrajit Bhattacharya. 2024. Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9147–9165, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning (Patidar et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.495.pdf