Multi-Source Neural Machine Translation with Data Augmentation

Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura


Abstract
Multi-source translation systems translate from multiple languages to a single target language. By using information from these multiple sources, these systems achieve large gains in accuracy. To train these systems, it is necessary to have corpora with parallel text in multiple sources and the target language. However, these corpora are rarely complete in practice due to the difficulty of providing human translations in all of the relevant languages. In this paper, we propose a data augmentation approach to fill such incomplete parts using multi-source neural machine translation (NMT). In our experiments, results varied over different language combinations but significant gains were observed when using a source language similar to the target language.
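The abstract's core idea, filling in the missing sides of an incomplete multi-way parallel corpus with machine-generated pseudo-translations so that a multi-source NMT system can be trained on complete tuples, can be illustrated with a minimal sketch. This is not the authors' implementation; the `translate` function, `fill_missing` helper, and `pivot_lang` argument are illustrative assumptions standing in for any trained NMT model and filling strategy.

```python
from typing import Dict, List, Optional

def translate(sentence: str, src_lang: str, tgt_lang: str) -> str:
    """Placeholder for a trained NMT model; here it just tags the output."""
    return f"<{src_lang}->{tgt_lang}> {sentence}"

def fill_missing(corpus: List[Dict[str, Optional[str]]],
                 pivot_lang: str) -> List[Dict[str, str]]:
    """Complete each tuple by translating from a language that is present."""
    completed = []
    for example in corpus:
        filled = dict(example)
        for lang, text in example.items():
            if text is None:
                # Translate from the pivot language (assumed always present)
                # to produce a pseudo-translation for the missing side.
                filled[lang] = translate(example[pivot_lang], pivot_lang, lang)
        completed.append(filled)
    return completed

if __name__ == "__main__":
    # Toy multi-way corpus: English is complete, German/Italian have gaps.
    corpus = [
        {"en": "Hello.", "de": "Hallo.", "it": None},
        {"en": "Thank you.", "de": None, "it": "Grazie."},
    ]
    for row in fill_missing(corpus, pivot_lang="en"):
        print(row)
```

After this completion step, the augmented corpus contains a full tuple for every example, which is the precondition for training the multi-source model; how well this works in practice depends on the language combination, as the abstract notes.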
Anthology ID:
2018.iwslt-1.7
Volume:
Proceedings of the 15th International Conference on Spoken Language Translation
Month:
October 29-30
Year:
2018
Address:
Brussels
Editors:
Marco Turchi, Jan Niehues, Marcello Federico
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
International Conference on Spoken Language Translation
Pages:
48–53
URL:
https://aclanthology.org/2018.iwslt-1.7
Award:
Best Student Paper
Cite (ACL):
Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, and Satoshi Nakamura. 2018. Multi-Source Neural Machine Translation with Data Augmentation. In Proceedings of the 15th International Conference on Spoken Language Translation, pages 48–53, Brussels. International Conference on Spoken Language Translation.
Cite (Informal):
Multi-Source Neural Machine Translation with Data Augmentation (Nishimura et al., IWSLT 2018)
PDF:
https://aclanthology.org/2018.iwslt-1.7.pdf