Efficient and High-Quality Neural Machine Translation with OpenNMT

Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Crego, Jean Senellart


Abstract
This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore the training and acceleration of Transformer models of various sizes, trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations and parallelization techniques, we create small, efficient, and high-quality neural machine translation models.
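The custom C++ inference engine referenced in the abstract is CTranslate2, OpenNMT's open-source decoder. As a minimal sketch (not taken from the paper), the snippet below illustrates quantized CPU decoding through the CTranslate2 Python bindings, assuming their current API; the model path and input tokens are hypothetical.

import ctranslate2

# Load a Transformer model converted to the CTranslate2 format; int8
# quantization shrinks the model and speeds up CPU inference.
translator = ctranslate2.Translator(
    "ende_transformer/",   # hypothetical path to a converted model
    device="cpu",
    compute_type="int8",
    inter_threads=2,       # batches decoded in parallel
    intra_threads=4,       # threads used within each batch
)

# Inputs are pre-tokenized (e.g. with SentencePiece); decoding runs beam search.
results = translator.translate_batch([["▁Hello", "▁world", "!"]], beam_size=2)
print(results[0].hypotheses[0])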
Anthology ID: 2020.ngt-1.25
Volume: Proceedings of the Fourth Workshop on Neural Generation and Translation
Month: July
Year: 2020
Address: Online
Editors: Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Kenneth Heafield, Marcin Junczys-Dowmunt, Ioannis Konstas, Xian Li, Graham Neubig, Yusuke Oda
Venue: NGT
Publisher: Association for Computational Linguistics
Pages: 211–217
URL: https://aclanthology.org/2020.ngt-1.25
DOI: 10.18653/v1/2020.ngt-1.25
Cite (ACL): Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Crego, and Jean Senellart. 2020. Efficient and High-Quality Neural Machine Translation with OpenNMT. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 211–217, Online. Association for Computational Linguistics.
Cite (Informal): Efficient and High-Quality Neural Machine Translation with OpenNMT (Klein et al., NGT 2020)
PDF: https://aclanthology.org/2020.ngt-1.25.pdf
Video: http://slideslive.com/38929839