Neural Machine Translation Training in a Multi-Domain Scenario

Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, Stephan Vogel

Abstract
In this paper, we explore alternative ways to train a neural machine translation system in a multi-domain scenario. We investigate data concatenation (with fine-tuning), model stacking (multi-level fine-tuning), data selection, and multi-model ensembling. Our findings show that the best translation quality is achieved by building an initial system on a concatenation of the available out-of-domain data and then fine-tuning it on the in-domain data. Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned on each successively closer domain. Data selection did not give the best results, but can be considered a decent compromise between training time and translation quality. A weighted ensemble of the individual models performed better than data selection and is useful when there is no time to fine-tune an already trained model.
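
The abstract contrasts model stacking (multi-level fine-tuning) with a weighted ensemble of individually trained domain models. The sketch below is a minimal, illustrative rendering of those two ideas only; the corpus names, the stub train() helper, and the ensemble weights are assumptions made for illustration and are not the authors' code or data.

    from typing import Dict, List, Optional

    # --- Model stacking (multi-level fine-tuning) ---------------------------
    # Domains ordered from furthest out-of-domain to in-domain, as the
    # abstract recommends; each stage continues training from the previous
    # stage's parameters. train() is a stand-in stub, not a real NMT trainer.

    def train(corpus: str, init_params: Optional[Dict[str, float]] = None) -> Dict[str, float]:
        """Stand-in for one NMT training run; returns toy 'parameters'."""
        params = dict(init_params) if init_params else {}
        params[corpus] = 1.0  # pretend this corpus left its mark on the model
        return params

    domains_far_to_near = ["UN", "Europarl", "news-commentary", "in-domain-TED"]  # assumed order
    model: Optional[Dict[str, float]] = None
    for corpus in domains_far_to_near:
        model = train(corpus, init_params=model)  # incremental fine-tuning

    # --- Weighted ensemble of individual domain models ----------------------
    # At decoding time, per-token distributions from separately trained models
    # are linearly interpolated with fixed weights (values assumed here).

    def ensemble_step(dists: List[Dict[str, float]], weights: List[float]) -> Dict[str, float]:
        combined: Dict[str, float] = {}
        for dist, w in zip(dists, weights):
            for tok, p in dist.items():
                combined[tok] = combined.get(tok, 0.0) + w * p
        return combined

    p_in = {"hello": 0.7, "hi": 0.3}    # in-domain model's next-token distribution
    p_out = {"hello": 0.4, "hi": 0.6}   # out-of-domain model's distribution
    print(ensemble_step([p_in, p_out], weights=[0.8, 0.2]))

In this toy ensemble, the higher weight on the in-domain model pulls the combined distribution toward its predictions, which mirrors the intuition behind weighting individual domain models at decoding time.
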
Anthology ID:
2017.iwslt-1.10
Volume:
Proceedings of the 14th International Conference on Spoken Language Translation
Month:
December 14-15
Year:
2017
Address:
Tokyo, Japan
Editors:
Sakriani Sakti, Masao Utiyama
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
International Workshop on Spoken Language Translation
Pages:
66–73
URL:
https://aclanthology.org/2017.iwslt-1.10
Cite (ACL):
Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural Machine Translation Training in a Multi-Domain Scenario. In Proceedings of the 14th International Conference on Spoken Language Translation, pages 66–73, Tokyo, Japan. International Workshop on Spoken Language Translation.
Cite (Informal):
Neural Machine Translation Training in a Multi-Domain Scenario (Sajjad et al., IWSLT 2017)
PDF:
https://aclanthology.org/2017.iwslt-1.10.pdf