%0 Conference Proceedings
%T Neural Machine Translation Training in a Multi-Domain Scenario
%A Sajjad, Hassan
%A Durrani, Nadir
%A Dalvi, Fahim
%A Belinkov, Yonatan
%A Vogel, Stephan
%Y Sakti, Sakriani
%Y Utiyama, Masao
%S Proceedings of the 14th International Conference on Spoken Language Translation
%D 2017
%8 December 14-15
%I International Workshop on Spoken Language Translation
%C Tokyo, Japan
%F sajjad-etal-2017-neural
%X In this paper, we explore alternative ways to train a neural machine translation system in a multi-domain scenario. We investigate data concatenation (with fine-tuning), model stacking (multi-level fine-tuning), data selection and multi-model ensemble. Our findings show that the best translation quality can be achieved by building an initial system on a concatenation of available out-of-domain data and then fine-tuning it on in-domain data. Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned with the next furthest domain and so on. Data selection did not give the best results, but can be considered as a decent compromise between training time and translation quality. A weighted ensemble of different individual models performed better than data selection. It is beneficial in a scenario when there is no time for fine-tuning an already trained model.
%U https://aclanthology.org/2017.iwslt-1.10
%P 66-73