Dynamic Position Encoding for Transformers

Joyce Zheng, Mehdi Rezagholizadeh, Peyman Passban


Abstract
Recurrent models dominated neural machine translation (NMT) for years, until Transformers radically changed the field with a novel architecture that relies on a feed-forward backbone and a self-attention mechanism. Although Transformers are powerful, they can fail to properly encode sequential/positional information due to their non-recurrent nature. To address this, position embeddings are defined for each time step to enrich word information. However, such embeddings are fixed after training, regardless of the task and the word-ordering system of the source and target languages. In this paper, we address this shortcoming by proposing a novel architecture with new position embeddings that take the order of the target words into consideration. Instead of using predefined position embeddings, our solution generates new embeddings to refine each word's position information. Since we do not dictate the positions of the source tokens but learn them in an end-to-end fashion, we refer to our method as dynamic position encoding (DPE). We evaluated our model on multiple datasets translating from English to German, French, and Italian, and observed meaningful improvements over the original Transformer.
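
To make the contrast with fixed position embeddings concrete, the sketch below shows the general idea of an input-dependent, learned refinement of the standard sinusoidal positions, trained end-to-end with the rest of the model. This is a minimal illustration under assumed design choices (a small feed-forward refinement network, additive combination); it is not the authors' exact DPE architecture, and the module and function names are hypothetical.

```python
# Minimal sketch of the idea behind dynamic position encoding (DPE).
# Illustrative assumption: a small learned network refines the fixed
# sinusoidal positions conditioned on the word embeddings, so position
# information can adapt to the word ordering of the language pair.

import math
import torch
import torch.nn as nn


def sinusoidal_positions(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed (non-learned) position encodings from the original Transformer."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe  # (seq_len, d_model)


class DynamicPositionEncoding(nn.Module):
    """Hypothetical DPE-style layer: adds a learned, content-conditioned
    offset to the fixed position signal, trained end-to-end."""

    def __init__(self, d_model: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, word_emb: torch.Tensor) -> torch.Tensor:
        # word_emb: (batch, seq_len, d_model)
        batch, seq_len, d_model = word_emb.shape
        fixed = sinusoidal_positions(seq_len, d_model).to(word_emb.device)
        fixed = fixed.unsqueeze(0).expand(batch, -1, -1)
        # Learned refinement of the position signal, conditioned on content.
        offset = self.refine(torch.cat([word_emb, fixed], dim=-1))
        return word_emb + fixed + offset


if __name__ == "__main__":
    # Usage: feed the enriched embeddings into a standard Transformer encoder.
    dpe = DynamicPositionEncoding(d_model=512)
    x = torch.randn(2, 10, 512)   # a batch of source word embeddings
    enriched = dpe(x)             # (2, 10, 512), positions refined end-to-end
    print(enriched.shape)
```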
Anthology ID:
2022.coling-1.450
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5076–5084
URL:
https://aclanthology.org/2022.coling-1.450
Cite (ACL):
Joyce Zheng, Mehdi Rezagholizadeh, and Peyman Passban. 2022. Dynamic Position Encoding for Transformers. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5076–5084, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Dynamic Position Encoding for Transformers (Zheng et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.450.pdf