Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering

Weihe Zhai, Arkaitz Zubiaga, Bingquan Liu, Chengjie Sun, Yalong Zhao


Abstract
The fusion of language models (LMs) and knowledge graphs (KGs) is widely used in commonsense question answering, but generating faithful explanations remains challenging. Current methods often overlook the faithfulness of path decoding, leading to divergence between graph encoder outputs and model predictions. We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations. To address this, we introduce the LM-KG Fidelity metric to assess the reliability of KG representations and propose the LM-KG Distribution-aware Alignment (LKDA) algorithm to improve explanation faithfulness. In the absence of ground-truth explanations, we evaluate KG explanations using the proposed Fidelity-Sparsity Trade-off Curve. Experiments on CommonsenseQA and OpenBookQA show that LKDA significantly enhances explanation fidelity and model performance, highlighting the need to address distributional misalignment for reliable commonsense reasoning.
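The abstract does not spell out how fidelity or sparsity are defined, so the following is a minimal, hypothetical Python sketch of one common reading of a Fidelity-Sparsity Trade-off Curve: fidelity as the agreement rate between the full model's answers and the answers produced when the explanation subgraph is pruned to a given sparsity level. The function name `fidelity_sparsity_curve` and all inputs are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch, not the paper's code. Assumes fidelity is the
# fraction of examples where predictions from the pruned explanation
# subgraph agree with the full model's predictions.
import numpy as np

def fidelity_sparsity_curve(full_preds, pruned_preds_by_sparsity):
    """full_preds: (N,) array of the full model's answer choices.
    pruned_preds_by_sparsity: dict mapping sparsity level (fraction of
    edges removed) -> (N,) array of predictions from the pruned graph.
    Returns {sparsity: fidelity} sorted by sparsity."""
    return {s: float(np.mean(preds == full_preds))
            for s, preds in sorted(pruned_preds_by_sparsity.items())}

# Toy usage with synthetic predictions over a 5-way multiple-choice task:
# pruning flips a growing fraction of predictions as sparsity rises.
rng = np.random.default_rng(0)
full = rng.integers(0, 5, size=100)
by_sparsity = {s: np.where(rng.random(100) < 1 - s, full,
                           rng.integers(0, 5, size=100))
               for s in (0.0, 0.3, 0.6, 0.9)}
print(fidelity_sparsity_curve(full, by_sparsity))
```

Plotting fidelity against sparsity then shows how quickly agreement degrades as graph edges are removed; under this reading, a more faithful explainer retains high fidelity even at high sparsity.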
Anthology ID: 2024.emnlp-main.1052
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 18920–18930
URL: https://aclanthology.org/2024.emnlp-main.1052
DOI: 10.18653/v1/2024.emnlp-main.1052
Cite (ACL): Weihe Zhai, Arkaitz Zubiaga, Bingquan Liu, Chengjie Sun, and Yalong Zhao. 2024. Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18920–18930, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering (Zhai et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.1052.pdf