Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations

Sameen Maruf, Ingrid Zukerman, Ehud Reiter, Gholamreza Haffari
Abstract
We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users’ understanding of a DT’s reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users’ expectations disagree with the DT’s predictions.
Anthology ID: 2021.inlg-1.12
Volume: Proceedings of the 14th International Conference on Natural Language Generation
Month: August
Year: 2021
Address: Aberdeen, Scotland, UK
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 114–127
URL: https://aclanthology.org/2021.inlg-1.12
PDF: https://aclanthology.org/2021.inlg-1.12.pdf