Combining PBSMT and NMT Back-translated Data for Efficient NMT

Alberto Poncelas, Maja Popović, Dimitar Shterionov, Gideon Maillette de Buy Wenniger, Andy Way


Abstract
Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have recently become popular. One of these methods is back-translation, which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences into the source language using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data produced by different MT approaches. In particular, we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models, as well as combinations of both. The results reveal that models achieve the best performance when the training set is augmented with back-translated data created by merging different MT approaches.
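The augmentation procedure the abstract describes can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: back_translate, build_training_set, and the two translator callables (nmt_back and pbsmt_back, which in practice would wrap a trained NMT system and a PBSMT system, each translating target-language text back into the source language) are hypothetical names introduced here for clarity.

    from typing import Callable, List, Tuple

    def back_translate(
        monolingual_target: List[str],
        translate_to_source: Callable[[str], str],
    ) -> List[Tuple[str, str]]:
        # Each synthetic pair uses the machine-translated output as the
        # source side and the authentic monolingual sentence as the target.
        return [(translate_to_source(t), t) for t in monolingual_target]

    def build_training_set(
        authentic: List[Tuple[str, str]],
        monolingual_target: List[str],
        nmt_back: Callable[[str], str],     # hypothetical NMT back-translator
        pbsmt_back: Callable[[str], str],   # hypothetical PBSMT back-translator
    ) -> List[Tuple[str, str]]:
        # Augment the authentic parallel corpus with synthetic pairs from
        # both systems, merging the two MT approaches as the paper does.
        synthetic_nmt = back_translate(monolingual_target, nmt_back)
        synthetic_pbsmt = back_translate(monolingual_target, pbsmt_back)
        return authentic + synthetic_nmt + synthetic_pbsmt

The resulting merged list of (source, target) pairs would then be used to train the final NMT model in place of the authentic parallel data alone.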
Anthology ID:
R19-1107
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
922–931
URL:
https://aclanthology.org/R19-1107
DOI:
10.26615/978-954-452-056-4_107
Bibkey:
Cite (ACL):
Alberto Poncelas, Maja Popović, Dimitar Shterionov, Gideon Maillette de Buy Wenniger, and Andy Way. 2019. Combining PBSMT and NMT Back-translated Data for Efficient NMT. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 922–931, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Combining PBSMT and NMT Back-translated Data for Efficient NMT (Poncelas et al., RANLP 2019)
PDF:
https://aclanthology.org/R19-1107.pdf