JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering

Yueqing Sun, Qi Shi, Le Qi, Yu Zhang


Abstract
Existing KG-augmented models for commonsense question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effectively fusing and reasoning over the question context representations and the KG representations, and (ii) automatically selecting relevant nodes from the noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which addresses the above limitations through joint reasoning of the LM and GNN and a dynamic KG pruning mechanism. Specifically, JointLK performs joint reasoning between the LM and GNN through a novel dense bidirectional attention module, in which each question token attends to KG nodes and each KG node attends to question tokens, and the two modal representations are fused and updated mutually through multi-step interactions. Then, the dynamic pruning module uses the attention weights generated by joint reasoning to recursively prune irrelevant KG nodes. We evaluate JointLK on the CommonsenseQA and OpenBookQA datasets, demonstrate its improvements over existing LM and LM+KG models, and show its capability to perform interpretable reasoning.
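The two mechanisms named in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the authors' released implementation; the class and function names (JointFusionLayer, prune_nodes), the dimensions, and the keep_ratio are illustrative assumptions.

```python
# A minimal sketch of (i) dense bidirectional attention between question tokens
# and KG nodes and (ii) attention-based dynamic pruning of KG nodes.
# NOT the authors' released code; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointFusionLayer(nn.Module):
    """One joint reasoning step: question tokens attend to KG nodes and vice versa."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.tok_update = nn.Linear(2 * dim, dim)
        self.node_update = nn.Linear(2 * dim, dim)

    def forward(self, tokens: torch.Tensor, nodes: torch.Tensor):
        # tokens: (T, d) LM token states, nodes: (N, d) GNN node states
        scores = self.q(tokens) @ self.k(nodes).t() / tokens.size(-1) ** 0.5  # (T, N)
        tok2node = F.softmax(scores, dim=-1)      # each token attends to KG nodes
        node2tok = F.softmax(scores.t(), dim=-1)  # each node attends to tokens
        new_tokens = self.tok_update(torch.cat([tokens, tok2node @ nodes], dim=-1))
        new_nodes = self.node_update(torch.cat([nodes, node2tok @ tokens], dim=-1))
        # Total attention each node receives from the question acts as a
        # relevance score, which the pruning step below reuses.
        node_scores = tok2node.sum(dim=0)  # (N,)
        return new_tokens, new_nodes, node_scores


def prune_nodes(nodes: torch.Tensor, node_scores: torch.Tensor, keep_ratio: float = 0.7):
    """Dynamic pruning: keep only the top-scoring fraction of KG nodes."""
    k = max(1, int(keep_ratio * nodes.size(0)))
    keep = node_scores.topk(k).indices
    return nodes[keep]


if __name__ == "__main__":
    layer = JointFusionLayer(dim=64)
    tokens, nodes = torch.randn(20, 64), torch.randn(50, 64)
    for _ in range(3):  # multi-step interaction with recursive pruning
        tokens, nodes, scores = layer(tokens, nodes)
        nodes = prune_nodes(nodes, scores)
    print(tokens.shape, nodes.shape)  # node set shrinks each step
```

In this toy setup, each pass jointly updates both modalities and discards the least-attended nodes, so later reasoning steps operate on a progressively smaller, more relevant subgraph.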
Anthology ID:
2022.naacl-main.372
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5049–5060
URL:
https://aclanthology.org/2022.naacl-main.372
DOI:
10.18653/v1/2022.naacl-main.372
Cite (ACL):
Yueqing Sun, Qi Shi, Le Qi, and Yu Zhang. 2022. JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5049–5060, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering (Sun et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.372.pdf
Video:
https://aclanthology.org/2022.naacl-main.372.mp4
Code:
yueqing-sun/jointlk
Data:
CommonsenseQA, ConceptNet, OpenBookQA