VISIT: Visualizing and Interpreting the Semantic Information Flow of Transformers

Shahar Katz, Yonatan Belinkov


Abstract
Recent advances in interpretability suggest we can project weights and hidden states of transformer-based language models (LMs) to their vocabulary, a transformation that makes them more human-interpretable. In this paper, we investigate LM attention heads and memory values, the vectors the models dynamically create and recall while processing a given input. By analyzing the tokens they represent through this projection, we identify patterns in the information flow inside the attention mechanism. Based on our discoveries, we create a tool to visualize a forward pass of Generative Pre-trained Transformers (GPTs) as an interactive flow graph, with nodes representing neurons or hidden states and edges representing the interactions between them. Our visualization condenses large amounts of data into easy-to-read plots that can reflect the models' internal processing, uncovering the contribution of each component to the models' final prediction. Our visualization also reveals new insights about the role of layer norms as semantic filters that influence the models' output, and about neurons that are always activated during forward passes and act as regularization vectors.
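The vocabulary projection the abstract builds on can be illustrated with a short logit-lens-style sketch: read each layer's hidden state through the model's final layer norm and unembedding matrix to see which tokens it represents. The snippet below is a minimal illustration assuming GPT-2 via Hugging Face transformers; it is not the authors' VISIT implementation, and all names in it are illustrative.

```python
# Minimal logit-lens-style sketch (an assumption, not the paper's code):
# project each layer's hidden state at the last position onto the vocabulary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (n_layers + 1) tensors,
# each of shape [batch, seq_len, d_model].
for layer, h in enumerate(out.hidden_states):
    # Apply the final layer norm, then the unembedding (LM head),
    # to read the hidden state at the last position as vocabulary logits.
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    top5 = torch.topk(logits, k=5).indices.tolist()
    print(f"layer {layer:2d}:", tokenizer.convert_ids_to_tokens(top5))
```

Printing the top-5 tokens per layer gives a rough view of how the prediction sharpens across the forward pass, which is the kind of per-component semantic readout the paper's flow graph visualizes.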
Anthology ID:
2023.findings-emnlp.939
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14094–14113
URL:
https://aclanthology.org/2023.findings-emnlp.939
DOI:
10.18653/v1/2023.findings-emnlp.939
Cite (ACL):
Shahar Katz and Yonatan Belinkov. 2023. VISIT: Visualizing and Interpreting the Semantic Information Flow of Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14094–14113, Singapore. Association for Computational Linguistics.
Cite (Informal):
VISIT: Visualizing and Interpreting the Semantic Information Flow of Transformers (Katz & Belinkov, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.939.pdf