OpenNMT System Description for WNMT 2018: 800 words/sec on a single-core CPU

Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, Jean-Pierre Ramatchandirin, Josep Crego, Alexander Rush


Abstract
We present a system description of the OpenNMT Neural Machine Translation entry for the WNMT 2018 evaluation. In this work, we developed a heavily optimized NMT inference model targeting a high-performance CPU system. The final system uses a combination of four techniques, each of which contributes significant speed-ups in combination: (a) sequence distillation, (b) architecture modifications, (c) precomputation, particularly of vocabulary, and (d) CPU-targeted quantization. This work achieved the fastest performance in the shared task and led to the development of new features that have been integrated into OpenNMT and made available to the community.
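As a rough illustration of technique (d), CPU-targeted quantization typically maps float32 weights to 8-bit integers so that matrix products can use fast integer instructions. The sketch below shows a minimal per-matrix int8 scheme with a single scale factor; it is a hypothetical example under common assumptions, not the authors' actual implementation (the paper describes the details).

```python
import numpy as np

# Hypothetical sketch of per-matrix 8-bit weight quantization for CPU
# inference; illustrative only, not the OpenNMT implementation.
def quantize_int8(w):
    """Map a float32 matrix to int8 values plus one float scale."""
    scale = np.abs(w).max() / 127.0  # largest magnitude maps to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 matrix."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding bounds the per-element reconstruction error by scale / 2.
max_err = np.abs(w - w_hat).max()
```

In practice the integer matrices feed vectorized int8/int16 GEMM kernels, which is where the CPU speed-up comes from; the quantize/dequantize pair above only shows the numeric mapping.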
Anthology ID:
W18-2715
Volume:
Proceedings of the 2nd Workshop on Neural Machine Translation and Generation
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Alexandra Birch, Andrew Finch, Thang Luong, Graham Neubig, Yusuke Oda
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
122–128
URL:
https://aclanthology.org/W18-2715
DOI:
10.18653/v1/W18-2715
Cite (ACL):
Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, Jean-Pierre Ramatchandirin, Josep Crego, and Alexander Rush. 2018. OpenNMT System Description for WNMT 2018: 800 words/sec on a single-core CPU. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 122–128, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
OpenNMT System Description for WNMT 2018: 800 words/sec on a single-core CPU (Senellart et al., NGT 2018)
PDF:
https://aclanthology.org/W18-2715.pdf