When to explain: Identifying explanation triggers in human-agent interaction

Lea Krause, Piek Vossen


Abstract
With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase a user's understanding of and trust in an agent in human-agent interaction. Numerous studies have investigated this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that many instances would be missed if the agent relied solely on direct questions. To this end, we differentiate between direct triggers, such as commands or questions, and introduce indirect triggers, such as the detection of confusion or uncertainty.
Anthology ID: 2020.nl4xai-1.12
Volume: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Month: November
Year: 2020
Address: Dublin, Ireland
Editors: Jose M. Alonso, Alejandro Catala
Venue: NL4XAI
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 55–60
URL: https://aclanthology.org/2020.nl4xai-1.12
Cite (ACL): Lea Krause and Piek Vossen. 2020. When to explain: Identifying explanation triggers in human-agent interaction. In 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, pages 55–60, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): When to explain: Identifying explanation triggers in human-agent interaction (Krause & Vossen, NL4XAI 2020)
PDF: https://aclanthology.org/2020.nl4xai-1.12.pdf