Revisiting Low-Resource Neural Machine Translation: A Case Study

Rico Sennrich, Biao Zhang


Abstract
It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of a lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, along with recent techniques that have been shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German–English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean–English dataset, surpassing previously reported results by 4 BLEU.
Anthology ID:
P19-1021
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
211–221
URL:
https://aclanthology.org/P19-1021
DOI:
10.18653/v1/P19-1021
Cite (ACL):
Rico Sennrich and Biao Zhang. 2019. Revisiting Low-Resource Neural Machine Translation: A Case Study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 211–221, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Revisiting Low-Resource Neural Machine Translation: A Case Study (Sennrich & Zhang, ACL 2019)
PDF:
https://aclanthology.org/P19-1021.pdf
Video:
https://aclanthology.org/P19-1021.mp4