Geometric Interpretation of Layer Normalization and a Comparative Analysis with RMSNorm
Akshat Gupta | Atahan Ozdemir | Caoqinwei Gong | Gopala Anumanchipalli
Findings of the Association for Computational Linguistics: EACL 2026
This paper presents a novel geometric interpretation of LayerNorm and explores how LayerNorm influences the norm and orientation of hidden vectors in the representation space. We show that the definition of LayerNorm is innately linked to the uniform vector, defined as the vector of all ones, (1, 1, …, 1). We then show that the standardization step in LayerNorm can be understood in three simple steps: (i) remove the component of a vector along the uniform vector, (ii) normalize the remaining vector, and (iii) scale the resultant vector by √d, where d is the dimensionality of the representation space. Finally, we compare the hidden representations of LayerNorm-based LLMs with models trained using RMSNorm and show that all LLMs naturally operate orthogonally to the uniform vector both during training and inference; that is, on average they have no component along the uniform vector. This presents the first mechanistic evidence that removing the component along the uniform vector in LayerNorm is a redundant step. These results advocate for using RMSNorm, which is also more computationally efficient, over LayerNorm.
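The three-step decomposition of LayerNorm's standardization step can be verified numerically. The sketch below (a minimal NumPy illustration, not code from the paper) checks that subtracting the mean and dividing by the standard deviation is identical to projecting out the uniform direction, normalizing, and rescaling by √d; it also shows that for a zero-mean vector, LayerNorm and RMSNorm coincide:

```python
import numpy as np

d = 16
rng = np.random.default_rng(0)
x = rng.normal(size=d)

# Standard LayerNorm standardization (no learned affine parameters):
ln = (x - x.mean()) / x.std()

# Geometric view: (i) remove the component along the unit uniform vector,
# (ii) normalize the remainder, (iii) scale by sqrt(d).
u = np.ones(d) / np.sqrt(d)                          # unit uniform vector
x_perp = x - (x @ u) * u                             # (i) project out uniform direction
geo = np.sqrt(d) * x_perp / np.linalg.norm(x_perp)   # (ii) + (iii)

assert np.allclose(ln, geo)

# For a vector orthogonal to the uniform vector (zero mean),
# RMSNorm produces the same output as LayerNorm:
rms = x_perp / np.sqrt(np.mean(x_perp**2))
assert np.allclose(rms, (x_perp - x_perp.mean()) / x_perp.std())
```

The equivalence holds because subtracting the mean is exactly the projection that removes the uniform-vector component, and the mean-centered vector has norm √d·σ, so dividing by σ is the same as normalizing and rescaling by √d.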