On the use of BERT for Neural Machine Translation

Stephane Clinchant, Kweon Woo Jung, Vassilina Nikoulina


Abstract
Exploiting large pretrained models for various NMT tasks has gained a lot of visibility recently. In this work we study how BERT pretrained models could be exploited for supervised Neural Machine Translation. We compare various ways to integrate a pretrained BERT model with an NMT model and study the impact of the monolingual data used for BERT training on the final translation quality. We use WMT-14 English-German, IWSLT15 English-German and IWSLT14 English-Russian datasets for these experiments. In addition to the standard task test set evaluation, we perform evaluation on out-of-domain test sets and noise-injected test sets, in order to assess how BERT pretrained representations affect model robustness.
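The abstract mentions several ways of integrating a pretrained BERT model with an NMT model but does not detail them on this page. The sketch below is purely illustrative and is not the authors' configuration: it shows one common integration pattern, reusing a pretrained BERT model (via the HuggingFace transformers library, assumed to be installed) as the source-side encoder of a Transformer NMT model. The checkpoint name, target vocabulary size, and layer counts are placeholder assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertEncoderNMT(nn.Module):
    """Illustrative sketch: source sentences are encoded with pretrained BERT;
    a standard Transformer decoder is trained from scratch on top of it."""

    def __init__(self, tgt_vocab_size=32000, d_model=768, num_decoder_layers=6):
        super().__init__()
        # Pretrained BERT as the source-side encoder (can be frozen or fine-tuned).
        self.encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_decoder_layers)
        self.tgt_embed = nn.Embedding(tgt_vocab_size, d_model)
        self.out_proj = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # BERT hidden states serve as the encoder "memory" for cross-attention.
        memory = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        tgt = self.tgt_embed(tgt_ids)
        # Causal mask so each target position only attends to earlier positions.
        L = tgt_ids.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out_proj(dec)  # logits over the target vocabulary


tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer(["BERT representations feed the NMT encoder."],
                  return_tensors="pt", padding=True)
model = BertEncoderNMT()
logits = model(batch["input_ids"], batch["attention_mask"], torch.tensor([[1, 2, 3]]))
print(logits.shape)  # -> torch.Size([1, 3, 32000])
```

Other integration strategies studied in this line of work include initializing the NMT encoder weights from BERT or feeding BERT outputs as additional embeddings; consult the paper itself for the exact variants the authors compare.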
Anthology ID:
D19-5611
Volume:
Proceedings of the 3rd Workshop on Neural Generation and Translation
Month:
November
Year:
2019
Address:
Hong Kong
Editors:
Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, Katsuhito Sudoh
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
108–117
URL:
https://aclanthology.org/D19-5611
DOI:
10.18653/v1/D19-5611
Cite (ACL):
Stephane Clinchant, Kweon Woo Jung, and Vassilina Nikoulina. 2019. On the use of BERT for Neural Machine Translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 108–117, Hong Kong. Association for Computational Linguistics.
Cite (Informal):
On the use of BERT for Neural Machine Translation (Clinchant et al., NGT 2019)
PDF:
https://aclanthology.org/D19-5611.pdf