Exploring Recombination for Efficient Decoding of Neural Machine Translation

Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao


Abstract
In Neural Machine Translation (NMT), the decoder can capture the features of the entire prediction history with neural connections and representations. This means that partial hypotheses with different prefixes will be regarded differently no matter how similar they are. However, this might be inefficient, since some partial hypotheses can contain only local differences that will not influence future predictions. In this work, we introduce recombination in NMT decoding based on the concept of the "equivalence" of partial hypotheses. Heuristically, we use a simple n-gram suffix based equivalence function and adapt it into beam search decoding. Through experiments on large-scale Chinese-to-English and English-to-German translation tasks, we show that the proposed method can obtain similar translation quality with a smaller beam size, making NMT decoding more efficient.
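The core idea — treating partial hypotheses with the same n-gram suffix as "equivalent" and keeping only the highest-scoring one per equivalence class at each beam step — can be illustrated with a minimal sketch. This is not the authors' implementation (their code is in the linked repository); `step_scores` is a hypothetical toy scoring function standing in for the NMT decoder's next-token log-probabilities.

```python
import math

def recombine_beam_search(step_scores, beam_size, ngram=2, eos=0, max_len=5):
    """Beam search with n-gram suffix recombination: among candidate
    hypotheses sharing the same last-`ngram`-token suffix, only the
    highest-scoring one survives to the next step."""
    beams = [((), 0.0)]  # (token tuple, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        # Expand every live hypothesis by one token.
        candidates = []
        for prefix, score in beams:
            for tok, logp in step_scores(prefix).items():
                candidates.append((prefix + (tok,), score + logp))
        # Recombination: group by n-gram suffix, keep the best per group.
        best_by_suffix = {}
        for prefix, score in candidates:
            key = prefix[-ngram:]
            if key not in best_by_suffix or score > best_by_suffix[key][1]:
                best_by_suffix[key] = (prefix, score)
        # Keep the top `beam_size` surviving hypotheses.
        merged = sorted(best_by_suffix.values(), key=lambda h: -h[1])
        beams = []
        for prefix, score in merged[:beam_size]:
            if prefix[-1] == eos:
                finished.append((prefix, score))
            else:
                beams.append((prefix, score))
        if not beams:
            break
    return max(finished + beams, key=lambda h: h[1])

def toy_model(prefix):
    # Hypothetical toy distribution: prefers token 1, then emits EOS (0).
    if len(prefix) >= 3:
        return {0: math.log(0.9), 1: math.log(0.1)}
    return {1: math.log(0.6), 2: math.log(0.4)}
```

With a one-token suffix (`ngram=1`), hypotheses like `(1, 1)` and `(2, 1)` fall into the same class and are merged, freeing beam slots for genuinely different continuations; larger `ngram` makes the equivalence stricter and recombination rarer.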
Anthology ID:
D18-1511
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4785–4790
URL:
https://aclanthology.org/D18-1511
DOI:
10.18653/v1/D18-1511
Cite (ACL):
Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2018. Exploring Recombination for Efficient Decoding of Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4785–4790, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Exploring Recombination for Efficient Decoding of Neural Machine Translation (Zhang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1511.pdf
Attachment:
 D18-1511.Attachment.zip
Video:
 https://vimeo.com/306168250
Code
 zzsfornlp/znmt-merge