Defining Explanation in an AI Context

Tejaswani Verma, Christoph Lingenfelder, Dietrich Klakow


Abstract
As the use of AI systems increases, so does the need for explanation systems. Building an explanation system requires a definition of explanation. However, the natural language term explanation is difficult to define formally, as it spans multiple perspectives from domains such as psychology, philosophy, and cognitive science. We study multiple perspectives and aspects of the explainability of recommendations or predictions made by AI systems, and provide a generic definition of explanation. The proposed definition is ambitious and challenging to apply. To bridge the gap between theory and application, we also propose a possible architecture for an automated explanation system based on our definition of explanation.
Anthology ID: 2020.blackboxnlp-1.29
Volume: Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2020
Address: Online
Editors: Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupała, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue: BlackboxNLP
Publisher: Association for Computational Linguistics
Pages: 314–322
URL: https://aclanthology.org/2020.blackboxnlp-1.29
DOI: 10.18653/v1/2020.blackboxnlp-1.29
Cite (ACL): Tejaswani Verma, Christoph Lingenfelder, and Dietrich Klakow. 2020. Defining Explanation in an AI Context. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 314–322, Online. Association for Computational Linguistics.
Cite (Informal): Defining Explanation in an AI Context (Verma et al., BlackboxNLP 2020)
PDF: https://aclanthology.org/2020.blackboxnlp-1.29.pdf