Sequential Integrated Gradients: a simple but effective method for explaining language models

Joseph Enguehard


Abstract
Several explanation methods, such as Integrated Gradients (IG), can be characterised as path-based methods, as they rely on a straight line between the data and an uninformative baseline. However, when applied to language models, these methods produce a path for each word of a sentence simultaneously, which can create sentences from interpolated words that either have no clear meaning or have a meaning significantly different from the original sentence. To keep the meaning of these sentences as close as possible to the original one, we propose Sequential Integrated Gradients (SIG), which computes the importance of each word in a sentence by keeping every other word fixed, creating interpolations only between the baseline and the word of interest. Moreover, inspired by the training procedure of language models, we also propose to replace the baseline token “pad” with the trained token “mask”. While SIG is a simple improvement over the original IG method, we show on various models and datasets that it proves to be a very effective method for explaining language models.
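To make the procedure concrete, below is a minimal sketch of SIG in Python/PyTorch. It assumes a model f that maps a (seq_len, dim) tensor of token embeddings to a scalar score; the function name, the left-Riemann approximation of the path integral, and the interface are illustrative assumptions, not the paper's reference implementation.

```python
# A minimal SIG sketch, assuming a model `f` mapping a (seq_len, dim)
# embedding tensor to a scalar score. Names and the Riemann-sum
# approximation are illustrative assumptions, not the paper's code.
import torch

def sig_attribution(f, embeddings, mask_embedding, n_steps=50):
    """Attribute f's scalar output to each token, one token at a time.

    embeddings:     (seq_len, dim) input token embeddings.
    mask_embedding: (dim,) embedding of the trained "mask" token,
                    used as the baseline instead of "pad".
    """
    embeddings = embeddings.detach()
    mask_embedding = mask_embedding.detach()
    seq_len, dim = embeddings.shape
    attributions = torch.zeros(seq_len)
    for i in range(seq_len):
        total_grad = torch.zeros(dim)
        for step in range(1, n_steps + 1):
            alpha = step / n_steps
            # Keep every other word fixed; interpolate only word i
            # between the mask baseline and its original embedding.
            point = embeddings.clone()
            point[i] = mask_embedding + alpha * (embeddings[i] - mask_embedding)
            point.requires_grad_(True)
            f(point).backward()
            total_grad += point.grad[i]
        # Riemann-sum approximation of the path integral for word i.
        attributions[i] = ((embeddings[i] - mask_embedding)
                           * total_grad / n_steps).sum()
    return attributions
```

Because each word gets its own path while the rest of the sentence stays fixed, every interpolated input stays close in meaning to the original sentence, which is the core design choice of SIG over standard IG.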
Anthology ID:
2023.findings-acl.477
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7555–7565
URL:
https://aclanthology.org/2023.findings-acl.477
DOI:
10.18653/v1/2023.findings-acl.477
Cite (ACL):
Joseph Enguehard. 2023. Sequential Integrated Gradients: a simple but effective method for explaining language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7555–7565, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Sequential Integrated Gradients: a simple but effective method for explaining language models (Enguehard, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.477.pdf