A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion

Zhao Xu, Wiem Ben Rim, Kiril Gashteovski, Timo Sztyler, Carolin Lawrence


Abstract
Explanations for AI are expected to help human users understand AI-driven predictions. Evaluating plausibility, i.e., how helpful the explanations are to human users, is therefore essential for developing eXplainable AI (XAI) that can genuinely aid its users. Here we propose a human-centric evaluation platform to measure the plausibility of explanations in the context of eXplainable Knowledge Graph Completion (XKGC). The target audience of the platform is researchers and practitioners who want to 1) investigate the real needs and interests of their target users in XKGC, and 2) evaluate the plausibility of XKGC methods. We showcase these two use cases in an experimental setting to illustrate what results can be achieved with our system.
Anthology ID:
2024.eacl-demo.3
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Month:
March
Year:
2024
Address:
St. Julians, Malta
Editors:
Nikolaos Aletras, Orphee De Clercq
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
18–26
URL:
https://aclanthology.org/2024.eacl-demo.3
Cite (ACL):
Zhao Xu, Wiem Ben Rim, Kiril Gashteovski, Timo Sztyler, and Carolin Lawrence. 2024. A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 18–26, St. Julians, Malta. Association for Computational Linguistics.
Cite (Informal):
A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion (Xu et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-demo.3.pdf