%0 Conference Proceedings
%T Multi-split Reversible Transformers Can Enhance Neural Machine Translation
%A Zhao, Yuekai
%A Zhou, Shuchang
%A Zhang, Zhihua
%Y Merlo, Paola
%Y Tiedemann, Jorg
%Y Tsarfaty, Reut
%S Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
%D 2021
%8 April
%I Association for Computational Linguistics
%C Online
%F zhao-etal-2021-multi
%X Large-scale transformers have been shown to be the state-of-the-art on neural machine translation. However, training these increasingly wider and deeper models can be tremendously memory-intensive. We reduce the memory burden by employing the idea of reversible networks, in which a layer’s input can be reconstructed from its output. We design three types of multi-split reversible transformers. We also devise a corresponding backpropagation algorithm, which does not need to store activations for most layers. Furthermore, we present two fine-tuning techniques, splits shuffle and self ensemble, to boost translation accuracy. Specifically, our best models surpass the vanilla transformer by at least 1.4 BLEU points on three datasets. Our large-scale reversible models achieve 30.0 BLEU on WMT’14 En-De and 43.5 BLEU on WMT’14 En-Fr, beating several very strong baselines with less than half of the training memory.
%R 10.18653/v1/2021.eacl-main.19
%U https://aclanthology.org/2021.eacl-main.19
%U https://doi.org/10.18653/v1/2021.eacl-main.19
%P 244-254
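
A minimal sketch of the reversible-layer idea the abstract refers to, in which a layer's input is recomputed from its output so activations need not be stored for backpropagation. This is a generic two-split, RevNet-style coupling block written in PyTorch, not the paper's specific multi-split formulation; the modules f and g stand in for arbitrary sublayers (e.g., attention or feed-forward) and are assumptions for illustration only.

import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    # Two-split reversible coupling (RevNet-style sketch, not the paper's exact design):
    #   forward:  y1 = x1 + f(x2);  y2 = x2 + g(y1)
    #   inverse:  x2 = y2 - g(y1);  x1 = y1 - f(x2)
    # Because the inverse exists, inputs can be reconstructed from outputs
    # during the backward pass instead of being cached in memory.
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f = f
        self.g = g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

if __name__ == "__main__":
    d = 8
    block = ReversibleBlock(nn.Linear(d, d), nn.Linear(d, d))
    x1, x2 = torch.randn(2, d), torch.randn(2, d)
    y1, y2 = block(x1, x2)
    r1, r2 = block.inverse(y1, y2)
    # Reconstruction matches the original inputs up to floating-point error.
    print(torch.allclose(x1, r1, atol=1e-6), torch.allclose(x2, r2, atol=1e-6))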