%0 Journal Article
%T How to Dissect a Muppet: The Structure of Transformer Embedding Spaces
%A Mickus, Timothee
%A Paperno, Denis
%A Constant, Mathieu
%J Transactions of the Association for Computational Linguistics
%D 2022
%V 10
%I MIT Press
%C Cambridge, MA
%F mickus-etal-2022-dissect
%X Pretrained embeddings based on the Transformer architecture have taken the NLP community by storm. We show that they can mathematically be reframed as a sum of vector factors and showcase how to use this reframing to study the impact of each component. We provide evidence that multi-head attentions and feed-forwards are not equally useful in all downstream applications, as well as a quantitative overview of the effects of finetuning on the overall embedding space. This approach allows us to draw connections to a wide range of previous studies, from vector space anisotropy to attention weights.
%R 10.1162/tacl_a_00501
%U https://aclanthology.org/2022.tacl-1.57
%U https://doi.org/10.1162/tacl_a_00501
%P 981-996