Improving Lexical Choice in Neural Machine Translation

Toan Nguyen, David Chiang


Abstract
We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrase-based translation in nearly all settings.
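The abstract's first fix replaces the raw inner product between the decoder context and each output embedding with one where both vectors are rescaled to a fixed norm, so that frequent words cannot win simply by having larger embeddings. A minimal NumPy sketch of that idea (the function name, shapes, and the scale `r` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fixed_norm_logits(h, W, r=3.0):
    """Output logits with the context vector h and every row of the
    output-embedding matrix W rescaled to norm r, so each logit is
    r^2 * cosine(h, w) and depends only on direction, not magnitude.
    Illustrative sketch; the paper's actual parameterization may differ."""
    h_hat = r * h / np.linalg.norm(h)                         # fixed-norm context
    W_hat = r * W / np.linalg.norm(W, axis=1, keepdims=True)  # fixed-norm embeddings
    return W_hat @ h_hat                                      # one logit per vocab word
```

With this scaling every logit is bounded by r², so a frequent word's embedding cannot grow its norm to dominate the softmax; only its angular similarity to the context matters.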
Anthology ID:
N18-1031
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
334–343
URL:
https://aclanthology.org/N18-1031
DOI:
10.18653/v1/N18-1031
Cite (ACL):
Toan Nguyen and David Chiang. 2018. Improving Lexical Choice in Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 334–343, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Improving Lexical Choice in Neural Machine Translation (Nguyen & Chiang, NAACL 2018)
PDF:
https://aclanthology.org/N18-1031.pdf
Video:
http://vimeo.com/276445914
Code:
tnq177/improving_lexical_choice_in_nmt + additional community code