Rapid Adaptation of Neural Machine Translation to New Languages

Graham Neubig, Junjie Hu


Abstract
This paper examines the problem of adapting neural machine translation systems to new, low-resourced languages (LRLs) as effectively and rapidly as possible. We propose methods based on starting with massively multilingual “seed models”, which can be trained ahead-of-time, and then continuing training on data related to the LRL. We contrast a number of strategies, leading to a novel, simple, yet effective method of “similar-language regularization”, where we jointly train on both an LRL of interest and a similar high-resourced language to prevent over-fitting to small LRL data. Experiments demonstrate that massively multilingual models, even without any explicit adaptation, are surprisingly effective, achieving BLEU scores of up to 15.5 with no data from the LRL, and that the proposed similar-language regularization method improves over other adaptation methods by an average of 1.7 BLEU points over 4 LRL settings.
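The similar-language regularization described in the abstract amounts to mixing examples from the small LRL corpus and a related high-resource-language (HRL) corpus while continuing training of the multilingual seed model. The sketch below is a minimal illustration of that corpus-mixing idea only, not the authors' implementation from neubig/rapid-adaptation; the function name `mixed_minibatches` and the `hrl_ratio` parameter are assumptions made for this example.

```python
import random

def mixed_minibatches(lrl_pairs, hrl_pairs, batch_size=32, hrl_ratio=0.5, seed=0):
    """Yield minibatches that mix low-resource-language (LRL) sentence pairs
    with pairs from a related high-resource language (HRL), so continued
    training does not overfit to the tiny LRL corpus."""
    rng = random.Random(seed)
    while True:
        batch = []
        for _ in range(batch_size):
            # With probability `hrl_ratio`, draw the example from the HRL
            # corpus; otherwise draw it from the LRL corpus.
            pool = hrl_pairs if rng.random() < hrl_ratio else lrl_pairs
            batch.append(rng.choice(pool))
        yield batch

# Toy usage: each corpus is a list of (source, target) sentence pairs.
lrl = [("lrl source 1", "english target 1"), ("lrl source 2", "english target 2")]
hrl = [("hrl source %d" % i, "english target %d" % i) for i in range(1000)]
batches = mixed_minibatches(lrl, hrl, batch_size=4, hrl_ratio=0.5)
print(next(batches))  # four (source, target) pairs drawn from both corpora
```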
Anthology ID: D18-1103
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 875–880
URL: https://aclanthology.org/D18-1103
DOI: 10.18653/v1/D18-1103
Cite (ACL): Graham Neubig and Junjie Hu. 2018. Rapid Adaptation of Neural Machine Translation to New Languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875–880, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Rapid Adaptation of Neural Machine Translation to New Languages (Neubig & Hu, EMNLP 2018)
PDF: https://aclanthology.org/D18-1103.pdf
Video: https://aclanthology.org/D18-1103.mp4
Code: neubig/rapid-adaptation