Learning to Explain Selectively: A Case Study on Question Answering

Shi Feng, Jordan Boyd-Graber


Abstract
Explanations promise to bridge the gap between humans and AI, yet it remains difficult to achieve consistent improvement in AI-augmented human decision making. The usefulness of AI explanations depends on many factors, and always showing the same type of explanation in all cases is suboptimal; so is relying on heuristics to adapt explanations for each scenario. We propose learning to explain "selectively": for each decision that the user makes, we use a model to choose the best explanation from a set of candidates and update this model with feedback to optimize human performance. We experiment on a question answering task, Quizbowl, and show that selective explanations improve human performance for both experts and crowdworkers.
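The abstract frames selection as a feedback loop: per user decision, pick one explanation from a candidate set, observe whether the user decided correctly, and update the selection model. The abstract does not name the learning algorithm, so the sketch below is a hypothetical illustration of that loop as a simple epsilon-greedy bandit; the explanation types, reward signal, and update rule are all illustrative assumptions, not the authors' method.

import random

# Hypothetical sketch: each "arm" is one candidate explanation type,
# and the reward is whether the user's decision was correct.
EXPLANATIONS = ["none", "highlight", "evidence", "confidence"]

class SelectiveExplainer:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.rewards = {arm: 0.0 for arm in arms}  # cumulative reward per arm
        self.counts = {arm: 0 for arm in arms}     # times each arm was shown

    def choose(self):
        # Explore a random explanation with probability epsilon;
        # otherwise exploit the arm with the best average reward so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.counts,
                   key=lambda a: self.rewards[a] / max(self.counts[a], 1))

    def update(self, arm, user_was_correct):
        # Feedback from the user's decision updates the arm's statistics.
        self.counts[arm] += 1
        self.rewards[arm] += 1.0 if user_was_correct else 0.0

explainer = SelectiveExplainer(EXPLANATIONS)
for _ in range(1000):
    arm = explainer.choose()
    # ... show the chosen explanation, observe the user's decision ...
    correct = random.random() < 0.5  # placeholder for real user feedback
    explainer.update(arm, correct)

In practice the paper's setting is richer than this stateless sketch (the choice can condition on the question and the user), but the core exploit-versus-explore trade-off over explanation candidates is the same.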
Anthology ID: 2022.emnlp-main.573
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 8372–8382
URL: https://aclanthology.org/2022.emnlp-main.573
DOI: 10.18653/v1/2022.emnlp-main.573
Cite (ACL): Shi Feng and Jordan Boyd-Graber. 2022. Learning to Explain Selectively: A Case Study on Question Answering. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8372–8382, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Learning to Explain Selectively: A Case Study on Question Answering (Feng & Boyd-Graber, EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.573.pdf