An Empirical Study on Explanations in Out-of-Domain Settings

George Chrysostomou, Nikolaos Aletras


Abstract
Recent work in Natural Language Processing has focused on developing approaches that extract faithful explanations, either by identifying the most important tokens in the input (i.e. post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e. select-then-predict models). Currently, these approaches are largely evaluated in in-domain settings. Yet, little is known about how post-hoc explanations and inherently faithful models perform in out-of-domain settings. In this paper, we conduct an extensive empirical study that examines: (1) the out-of-domain faithfulness of post-hoc explanations generated by five feature attribution methods; and (2) the out-of-domain predictive performance of two inherently faithful models, across six datasets. Contrary to our expectations, results show that in many cases out-of-domain post-hoc explanation faithfulness, measured by sufficiency and comprehensiveness, is higher than in-domain faithfulness. We find this misleading and suggest using a random baseline as a yardstick for evaluating post-hoc explanation faithfulness. Our findings also show that select-then-predict models achieve predictive performance in out-of-domain settings comparable to that of full-text trained models.
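The sketch below illustrates how sufficiency and comprehensiveness are commonly computed for a token-level post-hoc explanation, following the ERASER-style definitions the abstract refers to, together with a random-attribution baseline of the kind the paper suggests as a yardstick. It is a minimal illustration, not the authors' implementation (see the linked gchrysostomou/ood_faith repository for that); `predict_proba`, the function names, and the top-k rationale selection are assumptions made for this example.

```python
# Hedged sketch of sufficiency / comprehensiveness under ERASER-style
# definitions. `predict_proba` is a hypothetical stand-in for any
# classifier mapping a token list to class probabilities.
import random
from typing import Callable, List, Sequence


def top_k_rationale(scores: Sequence[float], k: int) -> List[int]:
    """Indices of the k tokens with the highest attribution scores."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]


def sufficiency(tokens: List[str], scores: Sequence[float], label: int,
                predict_proba: Callable[[List[str]], Sequence[float]], k: int) -> float:
    """p(label | full text) - p(label | rationale only); lower is better."""
    rationale = set(top_k_rationale(scores, k))
    kept = [t for i, t in enumerate(tokens) if i in rationale]
    return predict_proba(tokens)[label] - predict_proba(kept)[label]


def comprehensiveness(tokens: List[str], scores: Sequence[float], label: int,
                      predict_proba: Callable[[List[str]], Sequence[float]], k: int) -> float:
    """p(label | full text) - p(label | text with rationale removed); higher is better."""
    rationale = set(top_k_rationale(scores, k))
    rest = [t for i, t in enumerate(tokens) if i not in rationale]
    return predict_proba(tokens)[label] - predict_proba(rest)[label]


def random_baseline_scores(n_tokens: int, seed: int = 0) -> List[float]:
    """Random attributions, used as a yardstick for the two metrics above."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_tokens)]
```

In this setting, an attribution method would be evaluated by comparing its sufficiency and comprehensiveness against the same metrics computed with `random_baseline_scores`, rather than by reading its raw scores in isolation.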
Anthology ID:
2022.acl-long.477
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6920–6938
URL:
https://aclanthology.org/2022.acl-long.477
DOI:
10.18653/v1/2022.acl-long.477
Cite (ACL):
George Chrysostomou and Nikolaos Aletras. 2022. An Empirical Study on Explanations in Out-of-Domain Settings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6920–6938, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study on Explanations in Out-of-Domain Settings (Chrysostomou & Aletras, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.477.pdf
Software:
 2022.acl-long.477.software.zip
Video:
 https://aclanthology.org/2022.acl-long.477.mp4
Code
 gchrysostomou/ood_faith
Data
IMDb Movie Reviews, SST