LLM Explainability via Attributive Masking Learning

Oren Barkan, Yonatan Toib, Yehonatan Elisha, Jonathan Weill, Noam Koenigstein

Abstract
In this paper, we introduce Attributive Masking Learning (AML), a method for explaining language model predictions by learning input masks. AML trains an auxiliary attribution model to identify the tokens that most influence a given language model's prediction. The central idea is to train this attribution model to simultaneously 1) mask as much of the input as possible while keeping the language model's prediction close to its prediction on the original input, and 2) induce a significant change in the model's prediction when the inverse (complement) of the same mask is applied to the input. This dual-masking approach further enables optimizing the explanation with respect to the metric of interest. We demonstrate the effectiveness of AML on both encoder-based and decoder-based language models, showing its superiority over a variety of state-of-the-art explanation methods across multiple benchmarks.
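To make the dual-masking objective concrete, here is a minimal PyTorch sketch of the kind of loss the abstract describes. It is an illustration, not the authors' implementation: `lm_logits_fn` (a frozen language model mapping input embeddings to class logits), `AttributionHead`, and the loss weights `lambda_inv` and `lambda_sparse` are assumptions introduced for this example, and the paper's exact losses, masking mechanism, and metric-specific optimization differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributionHead(nn.Module):
    """Maps token embeddings to per-token keep-probabilities in [0, 1].

    Hypothetical attribution model; the paper's architecture may differ.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden) -> mask: (batch, seq_len, 1)
        return torch.sigmoid(self.scorer(token_embeds))

def aml_loss(lm_logits_fn, attribution_head, token_embeds,
             lambda_inv=1.0, lambda_sparse=0.1):
    """Dual-masking loss sketch: mask-and-keep vs. complement-and-change.

    lm_logits_fn: frozen callable, embeddings -> class logits (assumed).
    token_embeds: (batch, seq_len, hidden) input embeddings.
    """
    # Reference prediction on the unmasked input (language model is frozen).
    with torch.no_grad():
        p_orig = F.softmax(lm_logits_fn(token_embeds), dim=-1)

    mask = attribution_head(token_embeds)  # soft keep-mask in [0, 1]
    logp_keep = F.log_softmax(lm_logits_fn(token_embeds * mask), dim=-1)
    logp_drop = F.log_softmax(lm_logits_fn(token_embeds * (1.0 - mask)), dim=-1)

    # (1) Prediction on the masked input should stay close to the original.
    fidelity = F.kl_div(logp_keep, p_orig, reduction="batchmean")
    # (2) Prediction on the complement-masked input should move away from it.
    #     (Negating KL is one simple choice; the paper's term may be bounded
    #     or formulated differently.)
    inverse = -F.kl_div(logp_drop, p_orig, reduction="batchmean")
    # (3) Mask as much of the input as possible: keep-mask should be sparse.
    sparsity = mask.mean()

    return fidelity + lambda_inv * inverse + lambda_sparse * sparsity
```

The per-token keep-probabilities produced by the attribution head then serve directly as token-importance scores; the metric-specific optimization mentioned in the abstract is omitted here for brevity.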
Anthology ID:
2024.findings-emnlp.556
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9522–9537
URL:
https://aclanthology.org/2024.findings-emnlp.556
Cite (ACL):
Oren Barkan, Yonatan Toib, Yehonatan Elisha, Jonathan Weill, and Noam Koenigstein. 2024. LLM Explainability via Attributive Masking Learning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9522–9537, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
LLM Explainability via Attributive Masking Learning (Barkan et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.556.pdf