Evaluating Explanations: How Much Do Explanations from the Teacher Aid Students?

Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen


Abstract
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training, but are not available at test time. Compared with prior proposals, our approach is less easily gamed, enabling principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate to high degree) across different student model architectures and learning strategies.
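The evaluation protocol summarized above is easy to prototype. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: a bag-of-words logistic-regression student is trained to simulate a toy teacher's predictions, once without explanations and once with the teacher's highlighted tokens folded into the training inputs only; test inputs never carry explanations, and the value of the attribution method is the resulting gain in simulation accuracy. The toy data, teacher labels, function names, and the token-repetition trick for injecting explanations are all assumptions made for illustration.

```python
# Hypothetical sketch of the student-teacher evaluation protocol from the
# abstract, using scikit-learn for illustration only; the toy data, teacher,
# and explanation-injection trick are assumptions, not the authors' setup.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy corpus in which a (pretend) teacher labels sentiment from one keyword.
keywords = ["great", "fine", "awful", "boring"]
docs = [f"{w} movie with some other words" for w in keywords] * 50
teacher_preds = np.array([1, 1, 0, 0] * 50)      # teacher's predicted labels
explanations = [[w] for w in keywords] * 50      # teacher-highlighted tokens

idx = rng.permutation(len(docs))
train, test = idx[:150], idx[150:]

vec = CountVectorizer()
X_all = vec.fit_transform(docs)

def augment(doc, salient_tokens, repeat=3):
    """Fold the explanation into a *training* input by repeating salient tokens."""
    return doc + " " + " ".join(salient_tokens * repeat)

# Baseline student: trained to simulate the teacher without explanations.
plain_student = LogisticRegression().fit(X_all[train], teacher_preds[train])

# Explanation-aware student: explanations touch only its training inputs.
aug_docs = [augment(docs[i], explanations[i]) for i in train]
explained_student = LogisticRegression().fit(vec.transform(aug_docs),
                                             teacher_preds[train])

def simulation_accuracy(student):
    """Agreement with the teacher on held-out inputs that carry no explanations."""
    return float((student.predict(X_all[test]) == teacher_preds[test]).mean())

gain = simulation_accuracy(explained_student) - simulation_accuracy(plain_student)
print(f"explanation value (simulation-accuracy gain): {gain:+.3f}")
```

On this trivially separable toy data both students match the teacher almost perfectly, so the printed gain is near zero; the point is the protocol. With a real trained teacher and harder inputs, the same loop is what separates more from less useful attribution methods.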
Anthology ID: 2022.tacl-1.21
Volume: Transactions of the Association for Computational Linguistics, Volume 10
Year: 2022
Address: Cambridge, MA
Editors: Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 359–375
URL: https://aclanthology.org/2022.tacl-1.21
DOI: 10.1162/tacl_a_00465
Cite (ACL): Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, and William W. Cohen. 2022. Evaluating Explanations: How Much Do Explanations from the Teacher Aid Students?. Transactions of the Association for Computational Linguistics, 10:359–375.
Cite (Informal): Evaluating Explanations: How Much Do Explanations from the Teacher Aid Students? (Pruthi et al., TACL 2022)
PDF: https://aclanthology.org/2022.tacl-1.21.pdf
Video: https://aclanthology.org/2022.tacl-1.21.mp4