Efficient Inference For Neural Machine Translation

Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, Ilya Chatsviorkin


Abstract
Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches and demonstrates that a combination of replacing decoder self-attention with simplified recurrent units, adopting a deep encoder and a shallow decoder architecture, and multi-head attention pruning can achieve up to 109% and 84% speedups on CPU and GPU, respectively, and reduce the number of parameters by 25% while maintaining the same translation quality in terms of BLEU.
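The deep-encoder/shallow-decoder idea in the abstract rests on a simple observation: decoder layers are more expensive than encoder layers (they carry both self-attention and cross-attention, and run once per generated token), so shifting depth from the decoder to the encoder cuts inference cost. A minimal back-of-the-envelope sketch, assuming illustrative dimensions (d_model=512, d_ff=2048, not values from the paper) and counting only the major weight matrices:

```python
# Rough per-layer parameter counts for a Transformer.
# d_model=512 and d_ff=2048 are assumed illustrative values,
# not configurations reported in the paper.

def encoder_layer_params(d_model=512, d_ff=2048):
    attn = 4 * d_model * d_model   # Q, K, V, and output projections
    ffn = 2 * d_model * d_ff       # two feed-forward matrices
    return attn + ffn

def decoder_layer_params(d_model=512, d_ff=2048):
    # A decoder layer has self-attention AND cross-attention,
    # so it is roughly one extra 4*d^2 block heavier.
    return 2 * (4 * d_model * d_model) + 2 * d_model * d_ff

# Standard 6-6 baseline vs. a deep-encoder/shallow-decoder 12-1 layout.
baseline = 6 * encoder_layer_params() + 6 * decoder_layer_params()
deep_shallow = 12 * encoder_layer_params() + 1 * decoder_layer_params()

print(baseline, deep_shallow)  # the 12-1 layout uses fewer parameters
```

Beyond the raw parameter count, the larger win at inference time is that the single decoder layer is executed autoregressively for every output token, whereas the deeper encoder runs only once per source sentence.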
Anthology ID:
2020.sustainlp-1.7
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | sustainlp
Publisher:
Association for Computational Linguistics
Pages:
48–53
URL:
https://aclanthology.org/2020.sustainlp-1.7
DOI:
10.18653/v1/2020.sustainlp-1.7
Cite (ACL):
Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, and Ilya Chatsviorkin. 2020. Efficient Inference For Neural Machine Translation. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 48–53, Online. Association for Computational Linguistics.
Cite (Informal):
Efficient Inference For Neural Machine Translation (Hsu et al., sustainlp 2020)
PDF:
https://aclanthology.org/2020.sustainlp-1.7.pdf
Video:
https://slideslive.com/38939429