Learning Curricula for Multilingual Neural Machine Translation Training

Gaurav Kumar, Philipp Koehn, Sanjeev Khudanpur


Abstract
Low-resource Multilingual Neural Machine Translation (MNMT) is typically tasked with improving the translation performance on one or more language pairs with the aid of high-resource language pairs. In this paper, we propose two simple search-based curricula (orderings of the multilingual training data) which help improve translation performance in conjunction with existing techniques such as fine-tuning. Additionally, we attempt to learn a curriculum for MNMT from scratch, jointly with the training of the translation system, using contextual multi-armed bandits. We show on the FLORES low-resource translation dataset that these learned curricula can provide better starting points for fine-tuning and improve the overall performance of the translation system.
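As a rough illustration of the bandit-driven curriculum idea described in the abstract, the sketch below uses a simple (non-contextual) EXP3-style bandit to decide which language pair the next training batch is drawn from, rewarding the chosen arm by a placeholder for dev-loss improvement on the target pair. The class name, arm set, reward definition, and hyperparameters are illustrative assumptions, not the paper's formulation (which uses contextual bandits).

import random
import math

# Hypothetical sketch: an EXP3-style bandit that picks which language pair
# to sample the next MNMT training batch from. Arm names, the reward signal,
# and gamma are assumptions for illustration only.
class Exp3CurriculumScheduler:
    def __init__(self, language_pairs, gamma=0.1):
        self.arms = list(language_pairs)       # e.g. ["si-en", "ne-en", "hi-en"]
        self.gamma = gamma                     # exploration rate
        self.weights = [1.0] * len(self.arms)

    def _probs(self):
        # Mix the normalized weights with uniform exploration.
        total = sum(self.weights)
        k = len(self.arms)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def choose(self):
        # Sample a language pair to draw the next training batch from.
        probs = self._probs()
        self.last_arm = random.choices(range(len(self.arms)), weights=probs)[0]
        self.last_prob = probs[self.last_arm]
        return self.arms[self.last_arm]

    def update(self, reward):
        # Reward in [0, 1], e.g. a scaled dev-loss improvement on the
        # low-resource target pair after the training step (an assumption).
        estimated = reward / self.last_prob    # importance-weighted reward
        self.weights[self.last_arm] *= math.exp(
            self.gamma * estimated / len(self.arms))

# Toy usage: rewards here are random stand-ins for measured improvements.
scheduler = Exp3CurriculumScheduler(["si-en", "ne-en", "hi-en"])
for step in range(100):
    pair = scheduler.choose()                  # train one batch on `pair` here
    reward = random.random()                   # placeholder reward signal
    scheduler.update(reward)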
Anthology ID:
2021.mtsummit-research.1
Volume:
Proceedings of Machine Translation Summit XVIII: Research Track
Month:
August
Year:
2021
Address:
Virtual
Editors:
Kevin Duh, Francisco Guzmán
Venue:
MTSummit
Publisher:
Association for Machine Translation in the Americas
Pages:
1–9
URL:
https://aclanthology.org/2021.mtsummit-research.1
Cite (ACL):
Gaurav Kumar, Philipp Koehn, and Sanjeev Khudanpur. 2021. Learning Curricula for Multilingual Neural Machine Translation Training. In Proceedings of Machine Translation Summit XVIII: Research Track, pages 1–9, Virtual. Association for Machine Translation in the Americas.
Cite (Informal):
Learning Curricula for Multilingual Neural Machine Translation Training (Kumar et al., MTSummit 2021)
PDF:
https://aclanthology.org/2021.mtsummit-research.1.pdf
Data
FLoRes