Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors

Zeyu Yun, Yubei Chen, Bruno Olshausen, Yann LeCun


Abstract
Transformer networks have revolutionized NLP representation learning since they were introduced. Though great effort has been made to explain the representations in transformers, it is widely recognized that our understanding is not sufficient. One important reason is the lack of visualization tools for detailed analysis. In this paper, we propose to use dictionary learning to open up these ‘black boxes’ by representing contextualized embeddings as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm prior linguistic knowledge, others are relatively unexpected and may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at: https://github.com/zeyuyun1/TransformerVis.
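The central idea, decomposing each contextualized embedding x into a sparse linear superposition x ≈ Φa of learned dictionary atoms ("transformer factors"), can be sketched with a minimal alternating optimization. This is an illustrative sketch on synthetic data, not the paper's implementation: the paper learns the dictionary over hidden states of a pretrained transformer, and the optimizer, dimensions, and hyperparameters below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 64, 500, 128        # embedding dim, #embeddings, #factors (overcomplete: k > d)

# Stand-in for contextualized embeddings; the paper uses transformer hidden states.
X = rng.normal(size=(d, n))

# Dictionary of "transformer factors", columns normalized to unit length.
Phi = rng.normal(size=(d, k))
Phi /= np.linalg.norm(Phi, axis=0)

lam, step = 0.1, 0.01          # l1 penalty and ISTA step size (assumed values)
A = np.zeros((k, n))           # sparse codes: one coefficient vector per embedding

for _ in range(50):
    # Sparse coding step: a few ISTA iterations (soft-thresholded gradient descent
    # on the lasso objective ||X - Phi A||^2 + lam * ||A||_1).
    for _ in range(20):
        G = A - step * (Phi.T @ (Phi @ A - X))
        A = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
    # Dictionary update step: least squares fit, then renormalize each atom.
    Phi = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-6 * np.eye(k))
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True) + 1e-12

# Each embedding is now approximated as a sparse superposition of factors.
recon_err = np.linalg.norm(X - Phi @ A) / np.linalg.norm(X)
sparsity = np.mean(np.abs(A) > 1e-8)
print(f"relative reconstruction error: {recon_err:.3f}")
print(f"fraction of nonzero coefficients: {sparsity:.3f}")
```

For visualization, one would then rank real input sentences by how strongly a given factor's coefficient activates on each token, which is what surfaces the word-level, sentence-level, and long-range patterns described above.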
Anthology ID:
2021.deelio-1.1
Volume:
Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Month:
June
Year:
2021
Address:
Online
Editors:
Eneko Agirre, Marianna Apidianaki, Ivan Vulić
Venue:
DeeLIO
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/2021.deelio-1.1
DOI:
10.18653/v1/2021.deelio-1.1
Cite (ACL):
Zeyu Yun, Yubei Chen, Bruno Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 1–10, Online. Association for Computational Linguistics.
Cite (Informal):
Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors (Yun et al., DeeLIO 2021)
PDF:
https://aclanthology.org/2021.deelio-1.1.pdf
Optional supplementary data:
 2021.deelio-1.1.OptionalSupplementaryData.pdf
Code
 zeyuyun1/transformervis