Improve Transformer Models with Better Relative Position Embeddings

Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang


Abstract
The transformer model has demonstrated superior results on NLP tasks including machine translation and question answering. In this paper, we argue that position information is not fully utilized in existing work. For example, the sinusoidal embedding of the original Transformer is fixed and not learnable. We first review absolute position embeddings and existing relative position embedding methods. We then propose new methods to encourage increased interaction between the query, key, and relative position embeddings in the self-attention mechanism. Our most promising approach is a generalization of the absolute position embedding. Our method achieves higher accuracy than previous absolute and relative position embedding approaches on the SQuAD1.1 dataset. In addition, we address the inductive property of whether a position embedding is robust enough to handle long sequences. We demonstrate empirically that our relative position embedding method generalizes reasonably well and remains robust in this inductive setting. Finally, we show that our proposed method can be adopted as a near drop-in replacement that improves the accuracy of large models with little computational overhead.
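To make the abstract's idea concrete, the sketch below illustrates one way a learned relative position embedding a_ij can interact with both the query and the key inside the self-attention score, rather than with the query alone as in Shaw et al. (2018). The function name, tensor shapes, and the specific combination of terms (q_i·k_j + q_i·a_ij + k_j·a_ij) are illustrative assumptions for this sketch, not the exact formulation proposed in the paper.

```python
import numpy as np

def relative_attention_scores(x, W_q, W_k, rel_emb, max_rel_dist):
    """
    Illustrative sketch (hypothetical, not the authors' exact method):
    attention logits where a relative position embedding a_ij interacts
    with both the query and the key,
        e_ij = q_i . k_j + q_i . a_ij + k_j . a_ij,
    whereas Shaw et al. (2018) use only the q_i . a_ij interaction.

    x:        (seq_len, d_model) token representations
    W_q, W_k: (d_model, d_head) projection matrices
    rel_emb:  (2*max_rel_dist + 1, d_head) relative position embeddings
    """
    seq_len = x.shape[0]
    q = x @ W_q          # (seq_len, d_head)
    k = x @ W_k          # (seq_len, d_head)

    # Clip relative distances j - i into [-max_rel_dist, max_rel_dist]
    pos = np.arange(seq_len)
    rel = np.clip(pos[None, :] - pos[:, None], -max_rel_dist, max_rel_dist)
    a = rel_emb[rel + max_rel_dist]           # (seq_len, seq_len, d_head)

    content = q @ k.T                          # q_i . k_j
    q_rel = np.einsum('id,ijd->ij', q, a)      # q_i . a_ij
    k_rel = np.einsum('jd,ijd->ij', k, a)      # k_j . a_ij

    return (content + q_rel + k_rel) / np.sqrt(q.shape[-1])

# Toy usage with random weights
rng = np.random.default_rng(0)
d_model, d_head, seq_len, max_rel = 16, 8, 5, 4
x = rng.normal(size=(seq_len, d_model))
W_q = rng.normal(size=(d_model, d_head))
W_k = rng.normal(size=(d_model, d_head))
rel_emb = rng.normal(size=(2 * max_rel + 1, d_head))
print(relative_attention_scores(x, W_q, W_k, rel_emb, max_rel).shape)  # (5, 5)
```

Because the relative embeddings are indexed by clipped distance rather than absolute index, the same table can be reused for sequences longer than those seen in training, which is the inductive property discussed in the abstract.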
Anthology ID:
2020.findings-emnlp.298
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3327–3335
URL:
https://aclanthology.org/2020.findings-emnlp.298
DOI:
10.18653/v1/2020.findings-emnlp.298
Cite (ACL):
Zhiheng Huang, Davis Liang, Peng Xu, and Bing Xiang. 2020. Improve Transformer Models with Better Relative Position Embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3327–3335, Online. Association for Computational Linguistics.
Cite (Informal):
Improve Transformer Models with Better Relative Position Embeddings (Huang et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.298.pdf
Video:
https://slideslive.com/38940108
Code:
huggingface/transformers
Data:
SQuAD