SHAPE: Shifted Absolute Position Embedding for Transformers

Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, Kentaro Inui


Abstract
Position representation is crucial for building position-aware representations in Transformers. Existing position representations either fail to generalize to test data with unseen lengths or incur high computational cost. We investigate shifted absolute position embedding (SHAPE) to address both issues. The basic idea of SHAPE is to achieve shift invariance, which is a key property of recent successful position representations, by randomly shifting absolute positions during training. We demonstrate that SHAPE is empirically comparable to its counterpart while being simpler and faster.
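The core mechanism described in the abstract, shifting absolute positions by a random offset during training, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' reference implementation: it assumes sinusoidal absolute position embeddings and a hypothetical maximum shift `max_shift`; all names and hyperparameters are illustrative.

```python
# Minimal sketch of the SHAPE idea: add a random per-sequence offset to the
# absolute positions during training so the model learns shift invariance.
# Assumptions (not from the paper's code): sinusoidal embeddings, a
# hypothetical `max_shift` hyperparameter, and d_model divisible by 2.
import torch


def sinusoidal_embedding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Standard sinusoidal absolute position embedding for integer positions."""
    # positions: (batch, seq_len) -> returns (batch, seq_len, d_model)
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d_model, 2).float() / d_model))
    angles = positions.unsqueeze(-1).float() * inv_freq
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


def shape_positions(batch_size: int, seq_len: int,
                    max_shift: int, training: bool) -> torch.Tensor:
    """Absolute positions, randomly shifted per sequence during training."""
    positions = torch.arange(seq_len).unsqueeze(0).expand(batch_size, -1)
    if training:
        # One random offset per sequence; at test time no shift is applied.
        shift = torch.randint(0, max_shift + 1, (batch_size, 1))
        positions = positions + shift
    return positions


# Usage: add the (shifted) position embeddings to the token embeddings.
tokens = torch.randn(2, 10, 512)  # (batch, seq_len, d_model)
pos = shape_positions(batch_size=2, seq_len=10, max_shift=100, training=True)
x = tokens + sinusoidal_embedding(pos, d_model=512)
```

Because only the position indices change, this adds essentially no parameters or runtime cost over plain absolute position embeddings, which is consistent with the abstract's claim of being simpler and faster than its counterpart.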
Anthology ID:
2021.emnlp-main.266
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3309–3321
URL:
https://aclanthology.org/2021.emnlp-main.266
DOI:
10.18653/v1/2021.emnlp-main.266
Cite (ACL):
Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, and Kentaro Inui. 2021. SHAPE: Shifted Absolute Position Embedding for Transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3309–3321, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
SHAPE: Shifted Absolute Position Embedding for Transformers (Kiyono et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.266.pdf
Video:
https://aclanthology.org/2021.emnlp-main.266.mp4