Srikanta Bedathur Jagannath


2025

Aligning Complex Knowledge Graph Question Answering as Knowledge-Aware Constrained Code Generation
Prerna Agarwal | Nishant Kumar | Srikanta Bedathur Jagannath
Proceedings of the 31st International Conference on Computational Linguistics

Generating executable logical forms (LFs) with Large Language Models (LLMs) in a few-shot setting for Knowledge Graph Question Answering (KGQA) is becoming popular. However, performance remains limited: LLMs see very little LF data during pre-training, so they generate many syntactically incorrect LFs. If LF generation can be transformed into a task more familiar to the LLM, syntax errors can potentially be reduced and generation quality improved. Meanwhile, there exist specialized LLMs trained or fine-tuned on code in many programming languages; these can be leveraged to generate the LF through step-wise, constrained generation of code expressions over the LF's modular functions. Based on this insight, we propose CodeAlignKGQA, a framework that casts LF generation as code generation with LF-specific constraints. We extract question-specific subgraph information to enable knowledge-aware code generation, and we additionally introduce a dynamic self-code-correction mechanism that is applied as needed. Extensive experiments on complex KGQA benchmarks such as KQA Pro demonstrate the effectiveness of our approach: CodeAlignKGQA surpasses all few-shot baselines on KQA Pro by 21%, setting a new state of the art.
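
To make the idea concrete, below is a minimal Python sketch of step-wise constrained code-expression generation with dynamic self-correction, in the spirit of the abstract. The operator whitelist uses KoPL-style function names from KQA Pro (Find, Relate, QueryAttr, etc.); the helper names (`llm_next_expression`, `generate_program`), the canned stub output, and the repair loop are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Whitelist of modular LF functions the decoder may emit
# (KoPL-style operators, as used by KQA Pro).
ALLOWED_FUNCTIONS = {"Find", "FilterConcept", "Relate", "QueryAttr", "And"}

# One code expression per step: FunctionName(args...)
EXPR_PATTERN = re.compile(r"^(\w+)\((.*)\)$")


def is_valid_expression(expr: str) -> bool:
    """Constraint check: the expression must call a whitelisted LF function."""
    match = EXPR_PATTERN.match(expr.strip())
    return bool(match) and match.group(1) in ALLOWED_FUNCTIONS


def llm_next_expression(question, subgraph, program_so_far, feedback=None):
    """Stand-in for a code-LLM call that proposes the next expression,
    conditioned on the question, a question-specific subgraph, the partial
    program, and optional error feedback. Returns canned steps here so the
    sketch runs end to end; replace with a real model call."""
    canned = [
        "Find(United States)",
        "Relate(capital, forward)",
        "QueryAttr(population)",
    ]
    step = len(program_so_far)
    return canned[step] if step < len(canned) else "QueryAttr(name)"


def generate_program(question, subgraph, max_steps=8, max_repairs=2):
    """Build the LF step by step, invoking self-correction only when a
    proposed expression violates the constraints."""
    program = []
    for _ in range(max_steps):
        expr = llm_next_expression(question, subgraph, program)
        repairs = 0
        while not is_valid_expression(expr) and repairs < max_repairs:
            # Dynamic self-correction: re-prompt with the error message.
            expr = llm_next_expression(
                question, subgraph, program,
                feedback=f"invalid expression {expr!r}; "
                         f"use one of {sorted(ALLOWED_FUNCTIONS)}",
            )
            repairs += 1
        if not is_valid_expression(expr):
            break  # give up on this step if repairs keep failing
        program.append(expr)
        if expr.startswith("QueryAttr"):
            break  # treat QueryAttr as the terminal operator in this sketch
    return program


if __name__ == "__main__":
    question = "What is the population of the capital of the United States?"
    print(generate_program(question, subgraph={}))
    # -> ['Find(United States)', 'Relate(capital, forward)', 'QueryAttr(population)']
```

The key design point the sketch illustrates is that validation happens per expression rather than per program, so the self-correction loop fires only on the offending step instead of regenerating the whole LF.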