SHAP-Based Explanation Methods: A Review for NLP Interpretability

Edoardo Mosca, Ferenc Szigeti, Stella Tragianni, Daniel Gallagher, Georg Groh


Abstract
Model explanations are crucial for the transparent, safe, and trustworthy deployment of machine learning models. The SHapley Additive exPlanations (SHAP) framework is considered by many to be a gold standard for local explanations thanks to its solid theoretical background and general applicability. In the years following its publication, several variants appeared in the literature—presenting adaptations in the core assumptions and target applications. In this work, we review all relevant SHAP-based interpretability approaches available to date and provide instructive examples as well as recommendations regarding their applicability to NLP use cases.
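The "solid theoretical background" the abstract refers to is the Shapley value from cooperative game theory: each feature's attribution is its average marginal contribution across all feature subsets. As an illustration only (this toy value function and token weights are hypothetical, not from the paper), an exact from-scratch computation might look like:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over all coalitions (exponential in len(features))."""
    n = len(features)
    phi = {}
    for f in features:
        others = [x for x in features if x != f]
        total = 0.0
        for k in range(n):
            # Weight of a coalition of size k not containing f.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = frozenset(subset)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical additive "model": score = sum of per-token weights.
weights = {"good": 2.0, "movie": 0.5, "not": -1.5}
value = lambda tokens: sum(weights[t] for t in tokens)

phi = shapley_values(value, list(weights))
# For an additive model, each token's Shapley value equals its weight,
# and the attributions sum to the full-coalition score (efficiency).
```

Practical SHAP variants (the subject of the review) replace this exponential enumeration with model-specific or sampling-based approximations.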
Anthology ID: 2022.coling-1.406
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 4593–4603
URL: https://aclanthology.org/2022.coling-1.406
Cite (ACL):
Edoardo Mosca, Ferenc Szigeti, Stella Tragianni, Daniel Gallagher, and Georg Groh. 2022. SHAP-Based Explanation Methods: A Review for NLP Interpretability. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4593–4603, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal): SHAP-Based Explanation Methods: A Review for NLP Interpretability (Mosca et al., COLING 2022)
PDF: https://aclanthology.org/2022.coling-1.406.pdf