Learning a Multi-Domain Curriculum for Neural Machine Translation
Wei Wang | Ye Tian | Jiquan Ngiam | Yinfei Yang | Isaac Caswell | Zarana Parekh
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum that gradually concentrates on data batches that are relevant to multiple domains and reduced in noise. Both the choice of features and the use of the curriculum prove crucial for balancing and improving all domains, including out-of-domain performance. In large-scale experiments, the multi-domain curriculum simultaneously matches or outperforms the performance of curricula tuned for individual domains, and brings solid gains over no-curriculum training.
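To make the described mechanism concrete, below is a minimal, hypothetical sketch of a multi-domain curriculum: each sentence pair carries per-domain relevance features and a noise/quality feature, these are combined into a single curriculum score, and the pool of eligible training data shrinks over time so batches gradually concentrate on high-scoring examples. This is not the paper's actual method (the paper learns how to weight the features); the fixed weights, the linear shrinking schedule, and all names here are illustrative assumptions.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    src: str
    tgt: str
    domain_scores: List[float]   # per-domain relevance features (higher = more relevant)
    quality_score: float         # denoising feature (higher = cleaner)

def curriculum_score(ex: Example, domain_weights: List[float],
                     quality_weight: float) -> float:
    # Combine the instance-level features into one scalar used to order examples.
    # Fixed weights are a simplification; the paper learns the combination.
    relevance = sum(w * s for w, s in zip(domain_weights, ex.domain_scores))
    return relevance + quality_weight * ex.quality_score

def retained_fraction(step: int, total_steps: int, floor: float = 0.2) -> float:
    # Fraction of the score-sorted data eligible at this step: starts at 1.0
    # (all data) and shrinks linearly toward `floor`, so later batches
    # concentrate on multi-domain-relevant, low-noise examples.
    progress = min(step / total_steps, 1.0)
    return 1.0 - (1.0 - floor) * progress

def sample_batch(sorted_pool: List[Example], step: int, total_steps: int,
                 batch_size: int, rng: random.Random) -> List[Example]:
    # Sample uniformly from the current top-scoring slice of the data.
    cutoff = max(batch_size,
                 int(len(sorted_pool) * retained_fraction(step, total_steps)))
    return rng.sample(sorted_pool[:cutoff], batch_size)

# Usage: score and sort the corpus once, then draw increasingly selective batches.
rng = random.Random(0)
pool = [Example(f"src {i}", f"tgt {i}",
                domain_scores=[rng.random(), rng.random()],
                quality_score=rng.random())
        for i in range(1000)]
pool.sort(key=lambda ex: curriculum_score(ex, domain_weights=[0.5, 0.5],
                                          quality_weight=1.0),
          reverse=True)
for step in range(0, 10000, 2500):
    batch = sample_batch(pool, step, total_steps=10000, batch_size=8, rng=rng)
```

Sampling from a shrinking top slice, rather than training on a fixed filtered subset, is what lets early training see broad, diverse data while later training focuses on the examples most relevant across domains.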