Explaining Bayesian Networks in Natural Language: State of the Art and Challenges

Conor Hennessy, Alberto Bugarín, Ehud Reiter


Abstract
To increase trust in the use of Bayesian Networks and to cement their role as models that can aid critical decision making, the challenge of explainability must be addressed. Previous attempts at explaining Bayesian Networks have largely focused on graphical or visual aids. In this paper we highlight the importance of a natural language approach to explanation and discuss previous and state-of-the-art attempts at the textual explanation of Bayesian Networks. We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks, so that this paper can serve as a reference for future work in the area.
Anthology ID: 2020.nl4xai-1.7
Volume: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Month: November
Year: 2020
Address: Dublin, Ireland
Editors: Jose M. Alonso, Alejandro Catala
Venue: NL4XAI
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 28–33
URL: https://aclanthology.org/2020.nl4xai-1.7
Cite (ACL): Conor Hennessy, Alberto Bugarín, and Ehud Reiter. 2020. Explaining Bayesian Networks in Natural Language: State of the Art and Challenges. In 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, pages 28–33, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Explaining Bayesian Networks in Natural Language: State of the Art and Challenges (Hennessy et al., NL4XAI 2020)
PDF: https://aclanthology.org/2020.nl4xai-1.7.pdf