Multi-split Reversible Transformers Can Enhance Neural Machine Translation

Yuekai Zhao, Shuchang Zhou, Zhihua Zhang


Abstract
Large-scale transformers have been shown to achieve state-of-the-art results on neural machine translation. However, training these increasingly wide and deep models can be tremendously memory intensive. We reduce the memory burden by employing the idea of reversible networks, in which a layer’s input can be reconstructed from its output. We design three types of multi-split-based reversible transformers. We also devise a corresponding backpropagation algorithm, which does not need to store activations for most layers. Furthermore, we present two fine-tuning techniques, splits shuffle and self ensemble, to boost translation accuracy. Specifically, our best models surpass the vanilla transformer by at least 1.4 BLEU points on three datasets. Our large-scale reversible models achieve 30.0 BLEU on WMT’14 En-De and 43.5 BLEU on WMT’14 En-Fr, beating several very strong baselines with less than half of the training memory.
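
As a rough illustration of the reversibility idea behind the memory savings, the following minimal sketch (PyTorch; a generic two-split RevNet-style coupling, not the authors’ multi-split design, with placeholder sub-layer names) shows how a layer’s inputs can be reconstructed exactly from its outputs, so intermediate activations need not be stored for backpropagation.

import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Two-split reversible coupling (RevNet-style sketch).

    The input is split into two halves (x1, x2); the output (y1, y2) can be
    inverted exactly, so activations need not be kept for the backward pass.
    `f` and `g` stand for arbitrary sub-layers (e.g. attention / feed-forward).
    """
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Reconstruct the inputs from the outputs instead of storing them.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Toy check: the inverse recovers the original inputs.
d = 8
block = ReversibleBlock(nn.Linear(d, d), nn.Linear(d, d))
x1, x2 = torch.randn(2, d), torch.randn(2, d)
with torch.no_grad():
    y1, y2 = block(x1, x2)
    r1, r2 = block.inverse(y1, y2)
print(torch.allclose(r1, x1, atol=1e-6), torch.allclose(r2, x2, atol=1e-6))

The paper generalizes this idea from two splits to multiple splits and pairs it with a backpropagation algorithm that recomputes, rather than stores, activations for most layers.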
Anthology ID:
2021.eacl-main.19
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
244–254
URL:
https://aclanthology.org/2021.eacl-main.19
DOI:
10.18653/v1/2021.eacl-main.19
Cite (ACL):
Yuekai Zhao, Shuchang Zhou, and Zhihua Zhang. 2021. Multi-split Reversible Transformers Can Enhance Neural Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 244–254, Online. Association for Computational Linguistics.
Cite (Informal):
Multi-split Reversible Transformers Can Enhance Neural Machine Translation (Zhao et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.19.pdf