Position Information Emerges in Causal Transformers Without Positional Encodings via Similarity of Nearby Embeddings

Chunsheng Zuo, Pavel Guerzhoy, Michael Guerzhoy


Abstract
Transformers with causal attention can solve tasks that require positional information without using positional encodings. In this work, we propose and investigate a new hypothesis about how positional information can be stored without explicit positional encoding. We observe that nearby embeddings are more similar to each other than faraway embeddings, allowing the Transformer to potentially reconstruct the positions of tokens. We show that this pattern can occur in both trained and randomly initialized Transformer models with causal attention and no positional encodings, over a common range of hyperparameters.
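The similarity pattern described in the abstract can be probed directly. The sketch below is not the authors' code; the model sizes, the random-token input, and the use of PyTorch's built-in Transformer encoder are illustrative assumptions. It runs a randomly initialized causal Transformer with no positional encodings and reports the average cosine similarity between hidden states as a function of how far apart their positions are.

# Minimal sketch: cosine similarity of hidden states at different positions
# in a randomly initialized causal Transformer with no positional encodings.
# All sizes below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model, n_heads, n_layers, vocab_size, seq_len = 64, 4, 4, 100, 32

# Token embeddings only -- no positional encodings are added anywhere.
embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                   dim_feedforward=4 * d_model,
                                   dropout=0.0, batch_first=True)
encoder = nn.TransformerEncoder(layer, n_layers)

tokens = torch.randint(vocab_size, (1, seq_len))
# Causal (lower-triangular) attention mask: position i cannot attend to j > i.
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

with torch.no_grad():
    h = encoder(embed(tokens), mask=causal_mask).squeeze(0)  # (seq_len, d_model)

# Pairwise cosine similarity between the embeddings at every pair of positions.
sim = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1)

# Average similarity as a function of the distance |i - j| between positions;
# the paper's hypothesis predicts higher values for smaller distances.
for dist in (1, 4, 16):
    mean_sim = torch.diagonal(sim, offset=dist).mean().item()
    print(f"|i-j| = {dist:2d}: mean cosine similarity = {mean_sim:.3f}")

Averaging this statistic over many random seeds and inputs, and repeating it with a trained model, is one way to check whether nearby positions are consistently more similar than distant ones.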
Anthology ID: 2025.coling-main.632
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 9418–9430
URL: https://aclanthology.org/2025.coling-main.632/
Cite (ACL): Chunsheng Zuo, Pavel Guerzhoy, and Michael Guerzhoy. 2025. Position Information Emerges in Causal Transformers Without Positional Encodings via Similarity of Nearby Embeddings. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9418–9430, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Position Information Emerges in Causal Transformers Without Positional Encodings via Similarity of Nearby Embeddings (Zuo et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.632.pdf