Matvey Mikhalchuk
2024
The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models
Anton Razzhigaev | Matvey Mikhalchuk | Elizaveta Goncharova | Ivan Oseledets | Denis Dimitrov | Andrey Kuznetsov
Findings of the Association for Computational Linguistics: EACL 2024
In this study, we present an investigation into the anisotropy dynamics and intrinsic dimension of embeddings in transformer architectures, focusing on the dichotomy between encoders and decoders. Our findings reveal that the anisotropy profile in transformer decoders exhibits a distinct bell-shaped curve, with the highest anisotropy concentrated in the middle layers. This pattern diverges from the more uniformly distributed anisotropy observed in encoders. In addition, we find that the intrinsic dimension of embeddings increases in the initial phases of training, indicating an expansion into higher-dimensional space. This expansion is followed by a compression phase towards the end of training, during which dimensionality decreases, suggesting a refinement into more compact representations. Our results provide fresh insights into the embedding properties of encoders and decoders.
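The abstract above describes anisotropy profiles across layers but does not reproduce the estimator used. As an illustration only, the sketch below uses a common proxy for anisotropy, the mean pairwise cosine similarity of a layer's hidden states; the function name and usage are hypothetical and not taken from the paper.

```python
import torch


def mean_pairwise_cosine(hidden_states: torch.Tensor) -> float:
    """Anisotropy proxy: average cosine similarity over all pairs of embeddings.

    hidden_states: (num_tokens, hidden_dim) activations from a single layer.
    Values near 1 mean embeddings point in similar directions (high anisotropy);
    values near 0 mean directions are spread out (low anisotropy).
    """
    x = torch.nn.functional.normalize(hidden_states, dim=-1)  # unit-norm rows
    sim = x @ x.T                                             # pairwise cosine similarities
    n = sim.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()               # drop self-similarity terms
    return (off_diag / (n * (n - 1))).item()


# Hypothetical usage: trace the anisotropy profile layer by layer
# profile = [mean_pairwise_cosine(h) for h in per_layer_hidden_states]
```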
Your Transformer is Secretly Linear
Anton Razzhigaev | Matvey Mikhalchuk | Elizaveta Goncharova | Nikolai Gerasimenko | Ivan Oseledets | Denis Dimitrov | Andrey Kuznetsov
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This paper reveals a novel linear characteristic exclusive to transformer decoders, including models such as GPT, LLaMA, OPT, and BLOOM. We analyze embedding transformations between sequential layers, uncovering an almost perfect linear relationship (Procrustes similarity score of 0.99). However, linearity decreases when the residual component is removed, due to the consistently low output norm of the transformer layers. Our experiments show that pruning or linearly approximating some of the layers does not significantly impact loss or model performance. Moreover, we introduce a cosine-similarity-based regularization in our pretraining experiments on smaller models, aimed at reducing layer linearity. This regularization not only improves performance metrics on benchmarks like Tiny Stories and SuperGLUE but also successfully decreases the linearity of the models. This study challenges the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
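The abstract reports a Procrustes similarity score between embeddings of consecutive layers without spelling out the computation. The sketch below is an illustrative proxy, not the paper's exact procedure: it centers and Frobenius-normalizes both embedding matrices, fits the best linear map by least squares, and reports one minus the relative residual, so a score of 1.0 indicates a perfectly linear layer-to-layer relationship.

```python
import torch


def linearity_score(x: torch.Tensor, y: torch.Tensor) -> float:
    """Illustrative layer-to-layer linearity score (hypothetical, not the paper's code).

    x, y: (num_tokens, hidden_dim) embeddings from consecutive layers.
    Returns 1 - ||x @ A - y||_F^2 / ||y||_F^2 for the least-squares map A,
    computed after centering and scaling both matrices to unit Frobenius norm.
    """
    x = x - x.mean(dim=0)
    y = y - y.mean(dim=0)
    x = x / torch.linalg.norm(x)            # Frobenius-norm scaling
    y = y / torch.linalg.norm(y)
    A = torch.linalg.lstsq(x, y).solution   # best linear map with x @ A ≈ y
    residual = torch.linalg.norm(x @ A - y) ** 2
    return (1.0 - residual / torch.linalg.norm(y) ** 2).item()
```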
Co-authors
- Anton Razzhigaev 2
- Elizaveta Goncharova 2
- Ivan Oseledets 2
- Denis Dimitrov 2
- Andrey Kuznetsov 2