Imitation Learning for Non-Autoregressive Neural Machine Translation

Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Xu Sun


Abstract
Non-autoregressive translation models (NAT) have achieved impressive inference speedups. A potential issue of existing NAT algorithms, however, is that decoding is conducted in parallel, without directly considering previous target-side context. In this paper, we propose an imitation learning framework for non-autoregressive machine translation, which retains the fast translation speed while achieving translation performance comparable to its autoregressive counterpart. We conduct experiments on the IWSLT16, WMT14 and WMT16 datasets. Our proposed model achieves a significant speedup over autoregressive models while keeping the translation quality comparable. By sampling sentence length in parallel at inference time, we achieve 31.85 BLEU on WMT16 Ro→En and 30.68 BLEU on IWSLT16 En→De.
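The abstract mentions sampling sentence length in parallel at inference time. The sketch below illustrates one common way such length-parallel decoding is realized, decoding several candidate target lengths in a single parallel pass and keeping the best-scoring hypothesis; it is a minimal illustration under assumptions, not the authors' released code, and `nat_decode` and `score_hypothesis` are hypothetical stand-ins for the model's parallel decoder and a rescoring function.

```python
# Minimal sketch (assumed, not the paper's implementation): decode several
# candidate target lengths in one parallel batch, then rerank the hypotheses.

def decode_with_length_sampling(nat_decode, score_hypothesis, src,
                                predicted_len, num_candidates=4):
    """Decode `num_candidates` lengths around `predicted_len` and rerank."""
    half = num_candidates // 2
    candidate_lens = [max(1, predicted_len + off)
                      for off in range(-half, num_candidates - half)]

    # One batched call: every candidate length is decoded in parallel,
    # so the extra candidates add little latency on top of a single pass.
    hypotheses = nat_decode(src, candidate_lens)

    # Keep the hypothesis with the highest score (e.g. a teacher model's
    # log-probability of the candidate given the source).
    scores = [score_hypothesis(src, hyp) for hyp in hypotheses]
    best = max(range(len(hypotheses)), key=scores.__getitem__)
    return hypotheses[best]


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_decode = lambda src, lens: [src[:n] for n in lens]    # "copy" decoder
    toy_score = lambda src, hyp: -abs(len(src) - len(hyp))    # prefer same length
    print(decode_with_length_sampling(toy_decode, toy_score,
                                      ["ein", "kleiner", "test"],
                                      predicted_len=3))
```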
Anthology ID:
P19-1125
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1304–1312
URL:
https://aclanthology.org/P19-1125
DOI:
10.18653/v1/P19-1125
Cite (ACL):
Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation Learning for Non-Autoregressive Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304–1312, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Imitation Learning for Non-Autoregressive Neural Machine Translation (Wei et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1125.pdf