Aligning Faithful Interpretations with their Social Attribution

Alon Jacovi, Yoav Goldberg


Abstract
We find that the requirement that model interpretations be faithful is vague and incomplete. With interpretation by textual highlights as a case study, we present several failure cases. Borrowing concepts from social science, we identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution). We reformulate faithfulness as an accurate attribution of causality to the model, and introduce the concept of aligned faithfulness: faithful causal chains that are aligned with their expected social behavior. The two steps of causal attribution and social attribution together complete the process of explaining behavior. With this formalization, we characterize various failures of misaligned faithful highlight interpretations, and propose an alternative causal chain to remedy the issues. Finally, we implement highlight explanations of the proposed causal format using contrastive explanations.
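To make the terms concrete, the following is a minimal, self-contained sketch of what a "contrastive" textual highlight can look like: a toy bag-of-words linear classifier whose per-token weight differences answer "why the predicted class rather than a foil class." The vocabulary, weights, and function names here are hypothetical illustrations and are not the implementation proposed in the paper.

```python
# Illustrative sketch only (not the paper's method): a toy linear bag-of-words
# classifier whose per-token weight differences serve as a *contrastive*
# highlight -- tokens that push the prediction toward the predicted class
# rather than a chosen foil (contrast) class.
import numpy as np

VOCAB = ["the", "movie", "was", "great", "boring", "plot", "acting"]
CLASSES = ["positive", "negative"]

# Hypothetical hand-set weights: one row per class, one column per word.
W = np.array([
    [0.0, 0.1, 0.0,  2.0, -1.5, 0.2, 0.3],   # positive
    [0.0, 0.1, 0.0, -1.8,  2.2, 0.4, 0.1],   # negative
])

def predict(tokens):
    # Bag-of-words counts followed by a linear scoring layer.
    x = np.array([tokens.count(w) for w in VOCAB], dtype=float)
    scores = W @ x
    return int(np.argmax(scores)), scores

def contrastive_highlight(tokens, predicted, foil, k=2):
    """Return the k tokens that most increase the score of the predicted
    class *relative to* the foil class ("why this and not that")."""
    contrib = {}
    for t in tokens:
        if t in VOCAB:
            j = VOCAB.index(t)
            contrib[t] = W[predicted, j] - W[foil, j]
    return sorted(contrib, key=contrib.get, reverse=True)[:k]

tokens = "the acting was great but the plot was boring".split()
pred, scores = predict(tokens)
foil = 1 - pred  # the alternative class we contrast against
print("prediction:", CLASSES[pred], scores)
print("contrastive highlight:", contrastive_highlight(tokens, pred, foil))
```

In this toy setup the highlight is causally faithful by construction (the selected tokens' weights directly determine the score gap), which is the kind of causal chain the paper argues must additionally be aligned with how readers socially interpret the highlighted words.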
Anthology ID: 2021.tacl-1.18
Volume: Transactions of the Association for Computational Linguistics, Volume 9
Year: 2021
Address: Cambridge, MA
Editors: Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 294–310
URL: https://aclanthology.org/2021.tacl-1.18
DOI: 10.1162/tacl_a_00367
Cite (ACL): Alon Jacovi and Yoav Goldberg. 2021. Aligning Faithful Interpretations with their Social Attribution. Transactions of the Association for Computational Linguistics, 9:294–310.
Cite (Informal): Aligning Faithful Interpretations with their Social Attribution (Jacovi & Goldberg, TACL 2021)
PDF: https://aclanthology.org/2021.tacl-1.18.pdf
Video: https://aclanthology.org/2021.tacl-1.18.mp4