TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation

Miriam Horovicz, Roni Goldshmidt


Abstract
As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model's response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method's ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the interpretability necessary for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems. Open-source code: https://github.com/ronigold/TokenSHAP
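The core idea described in the abstract, estimating per-token Shapley values by Monte Carlo sampling, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `value_fn` is a hypothetical stand-in for TokenSHAP's payoff function (the similarity between the model's response to a subset-prompt and its response to the full prompt), and the estimator averages marginal contributions over random token permutations.

```python
import random

def monte_carlo_shapley(tokens, value_fn, num_samples=200, seed=0):
    """Estimate per-token Shapley values via random permutations.

    value_fn maps a subset of tokens (a list, kept in original order) to a
    scalar payoff; in TokenSHAP this would be a response-similarity score,
    but any user-supplied function works for this sketch.
    """
    rng = random.Random(seed)
    shapley = {i: 0.0 for i in range(len(tokens))}
    for _ in range(num_samples):
        perm = list(range(len(tokens)))
        rng.shuffle(perm)
        included = set()
        prev = value_fn([])  # payoff of the empty coalition
        for i in perm:
            included.add(i)
            cur = value_fn([tokens[j] for j in sorted(included)])
            shapley[i] += cur - prev  # marginal contribution of token i
            prev = cur
    return {i: v / num_samples for i, v in shapley.items()}

# Toy payoff: count the "important" tokens present (a hypothetical stand-in
# for an LLM response-similarity score, used only to make the sketch runnable).
important = {"capital", "France"}
def toy_value(subset):
    return sum(1.0 for t in subset if t in important)

tokens = ["What", "is", "the", "capital", "of", "France", "?"]
scores = monte_carlo_shapley(tokens, toy_value)
```

Because the toy payoff is additive, each "important" token's marginal contribution is exactly 1 in every permutation, so its estimated Shapley value is 1.0 and all other tokens receive 0.0; with a real LLM payoff the estimates would instead converge as `num_samples` grows.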
Anthology ID:
2024.nlp4science-1.1
Volume:
Proceedings of the 1st Workshop on NLP for Science (NLP4Science)
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Lotem Peled-Cohen, Nitay Calderon, Shir Lissak, Roi Reichart
Venue:
NLP4Science
Publisher:
Association for Computational Linguistics
Pages:
1–8
URL:
https://aclanthology.org/2024.nlp4science-1.1
Cite (ACL):
Miriam Horovicz and Roni Goldshmidt. 2024. TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation. In Proceedings of the 1st Workshop on NLP for Science (NLP4Science), pages 1–8, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation (Horovicz & Goldshmidt, NLP4Science 2024)
PDF:
https://aclanthology.org/2024.nlp4science-1.1.pdf