Jiajun Shen
2021
The Source-Target Domain Mismatch Problem in Machine Translation
Jiajun Shen | Peng-Jen Chen | Matthew Le | Junxian He | Jiatao Gu | Myle Ott | Michael Auli | Marc’Aurelio Ranzato
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
While we live in an increasingly interconnected world, different places still exhibit strikingly different cultures, and many events we experience in our everyday life pertain only to the specific place we live in. As a result, people often talk about different things in different parts of the world. In this work we study the effect of local context in machine translation and postulate that it causes the domains of the source and target languages to mismatch greatly. We first formalize the concept of source-target domain mismatch, propose a metric to quantify it, and provide empirical evidence for its existence. We conclude with an empirical study of how source-target domain mismatch affects the training of machine translation systems on low-resource languages. While this mismatch may severely affect back-translation, the degradation can be alleviated by combining back-translation with self-training and by increasing the amount of target-side monolingual data.
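The combination of back-translation and self-training mentioned in the abstract can be illustrated with a minimal sketch. The `forward_model` and `backward_model` objects and their `translate` method are hypothetical placeholders, not the paper's actual pipeline; the sketch only shows how the two sources of synthetic data augment the parallel corpus.

```python
# Minimal sketch, assuming hypothetical translation models with a
# `translate(sentence) -> sentence` method. Not the paper's implementation.

def augment_with_monolingual(parallel, mono_src, mono_tgt,
                             forward_model, backward_model):
    """Return an augmented list of (source, target) training pairs.

    parallel       -- list of (src, tgt) human-translated pairs
    mono_src       -- monolingual source-side sentences
    mono_tgt       -- monolingual target-side sentences
    forward_model  -- translates source -> target (hypothetical API)
    backward_model -- translates target -> source (hypothetical API)
    """
    augmented = list(parallel)

    # Back-translation: pair real target sentences with synthetic sources.
    for tgt in mono_tgt:
        augmented.append((backward_model.translate(tgt), tgt))

    # Self-training: pair real source sentences with synthetic targets.
    for src in mono_src:
        augmented.append((src, forward_model.translate(src)))

    return augmented
```

The forward (source-to-target) model is then retrained on the augmented pairs; the abstract's finding is that adding the self-training branch and more target-side monolingual data helps when back-translation alone degrades under domain mismatch.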
2019
Facebook AI’s WAT19 Myanmar-English Translation Task Submission
Peng-Jen Chen | Jiajun Shen | Matthew Le | Vishrav Chaudhary | Ahmed El-Kishky | Guillaume Wenzek | Myle Ott | Marc’Aurelio Ranzato
Proceedings of the 6th Workshop on Asian Translation
This paper describes Facebook AI’s submission to the WAT 2019 Myanmar-English translation task. Our baseline systems are BPE-based transformer models. We explore methods to leverage monolingual data to improve generalization, including self-training, back-translation and their combination. We further improve results by using noisy channel re-ranking and ensembling. We demonstrate that these techniques can significantly improve not only a system trained with additional monolingual data, but even the baseline system trained exclusively on the provided small parallel dataset. Our system ranks first in both directions according to human evaluation and BLEU, with a gain of over 8 BLEU points above the second-best system.
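The noisy channel re-ranking step referenced above can be sketched as scoring each n-best candidate with a weighted sum of the direct model, the reverse (channel) model, and a language model. The helper names and default weights below are illustrative assumptions, not the submission's actual code.

```python
# Hedged sketch of noisy-channel re-ranking over n-best candidates.
# `direct_lp(x, y)`, `channel_lp(y, x)`, and `lm_lp(y)` are hypothetical
# callables returning log-probabilities; w1 and w2 are assumed tuning weights.

def rerank(source, candidates, direct_lp, channel_lp, lm_lp, w1=1.0, w2=1.0):
    """Return the candidate y maximizing
    log P(y|x) + w1 * log P(x|y) + w2 * log P(y)."""
    def score(y):
        return direct_lp(source, y) + w1 * channel_lp(y, source) + w2 * lm_lp(y)
    return max(candidates, key=score)
```

In practice the candidates would come from beam search with the direct model, and the weights would be tuned on a validation set.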