Learn Together: Joint Multitask Finetuning of Pretrained KG-enhanced LLM for Downstream Tasks

Anastasia Martynova, Vladislav Tishin, Natalia Semenova


Abstract
Recent studies have shown that a knowledge graph (KG) can enhance text data by providing structured background knowledge, which can significantly improve the language understanding abilities of an LLM. Moreover, finetuning such models yields solid results on commonsense reasoning benchmarks. In this work, we introduce an expandable Joint Multitask Finetuning of Pretrained KG-enhanced LLM approach for Question Answering (QA), Machine Reading Comprehension (MRC), and Knowledge Graph Question Answering (KGQA) tasks. Extensive experiments show competitive performance of joint QA+MRC+KGQA finetuning over the single-task approach, with a maximum gain of 30% in accuracy.
Anthology ID:
2025.genaik-1.2
Volume:
Proceedings of the Workshop on Generative AI and Knowledge Graphs (GenAIK)
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Genet Asefa Gesese, Harald Sack, Heiko Paulheim, Albert Merono-Penuela, Lihu Chen
Venues:
GenAIK | WS
Publisher:
International Committee on Computational Linguistics
Pages:
13–19
URL:
https://aclanthology.org/2025.genaik-1.2/
Cite (ACL):
Anastasia Martynova, Vladislav Tishin, and Natalia Semenova. 2025. Learn Together: Joint Multitask Finetuning of Pretrained KG-enhanced LLM for Downstream Tasks. In Proceedings of the Workshop on Generative AI and Knowledge Graphs (GenAIK), pages 13–19, Abu Dhabi, UAE. International Committee on Computational Linguistics.
Cite (Informal):
Learn Together: Joint Multitask Finetuning of Pretrained KG-enhanced LLM for Downstream Tasks (Martynova et al., GenAIK 2025)
PDF:
https://aclanthology.org/2025.genaik-1.2.pdf