Selecting Artificially-Generated Sentences for Fine-Tuning Neural Machine Translation

Alberto Poncelas, Andy Way


Abstract
Neural Machine Translation (NMT) models tend to achieve the best performance when larger sets of parallel sentences are provided for training. For this reason, augmenting the training set with artificially-generated sentence pairs can boost performance. Nonetheless, performance can also be improved with a small number of sentences if they are in the same domain as the test set. Accordingly, we want to explore the use of artificially-generated sentences along with data-selection algorithms to improve NMT models trained solely with authentic data. In this work, we show how artificially-generated sentences can be more beneficial than authentic pairs and what their advantages are when used in combination with data-selection algorithms.
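The abstract combines two ideas: generating synthetic sentence pairs and selecting the ones closest to the target domain for fine-tuning. As a rough illustration of the selection step only (not the paper's exact algorithm; the scoring function, n-gram order, and cutoff here are all assumptions), one common family of data-selection methods ranks candidate sentences by n-gram overlap with an in-domain seed corpus:

```python
# Illustrative sketch: rank candidate (e.g. synthetic) sentences by
# bigram overlap with an in-domain seed set, then keep the top-k for
# fine-tuning. This is a generic overlap-based selection heuristic,
# not the specific method evaluated in the paper.

def ngrams(sentence, n=2):
    """Return the set of n-grams (as token tuples) of a sentence."""
    tokens = sentence.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_seed_ngrams(seed_sentences, n=2):
    """Union of n-grams over the in-domain seed corpus."""
    seed = set()
    for s in seed_sentences:
        seed |= ngrams(s, n)
    return seed

def select_top_k(candidates, seed_sentences, k, n=2):
    """Keep the k candidates with the highest in-domain n-gram overlap."""
    seed = build_seed_ngrams(seed_sentences, n)

    def score(sent):
        grams = ngrams(sent, n)
        # Fraction of the candidate's n-grams seen in the seed corpus.
        return len(grams & seed) / len(grams) if grams else 0.0

    return sorted(candidates, key=score, reverse=True)[:k]

# Toy usage: medical seed corpus, mixed-domain candidate pool.
seed = ["the patient was given medication",
        "the doctor examined the patient"]
candidates = [
    "the patient was discharged today",
    "stock prices fell sharply yesterday",
    "the doctor examined the wound",
]
selected = select_top_k(candidates, seed, k=2)
```

In a back-translation setting, the score would typically be computed on the target-language side of each synthetic pair, and the selected pairs used for a final fine-tuning pass over the baseline model.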
Anthology ID:
W19-8629
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
219–228
URL:
https://aclanthology.org/W19-8629
DOI:
10.18653/v1/W19-8629
Cite (ACL):
Alberto Poncelas and Andy Way. 2019. Selecting Artificially-Generated Sentences for Fine-Tuning Neural Machine Translation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 219–228, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Selecting Artificially-Generated Sentences for Fine-Tuning Neural Machine Translation (Poncelas & Way, INLG 2019)
PDF:
https://aclanthology.org/W19-8629.pdf
Data
WMT 2015