Multilingual Translation via Grafting Pre-trained Language Models

Zewei Sun, Mingxuan Wang, Lei Li


Abstract
Can pre-trained BERT for one language and GPT for another be glued together to translate texts? Self-supervised training using only monolingual data has led to the success of pre-trained (masked) language models in many NLP tasks. However, directly connecting BERT as an encoder and GPT as a decoder is challenging for machine translation, because GPT-like models lack the cross-attention component that seq2seq decoders need. In this paper, we propose Graformer to graft separately pre-trained (masked) language models for machine translation. With monolingual data for pre-training and parallel data for grafting training, we make maximal use of both types of data. Experiments on 60 directions show that our method achieves average improvements of 5.8 BLEU in x2en and 2.9 BLEU in en2x directions compared with the multilingual Transformer of the same size.
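The core obstacle the abstract names is that a GPT-style decoder has no cross-attention over encoder states. A minimal sketch of the grafting idea, in numpy: a newly initialized single-head cross-attention sublayer lets (hypothetical) frozen decoder states attend over (hypothetical) frozen encoder states, with a residual connection back. All names and shapes here are illustrative assumptions, not Graformer's actual architecture, which the paper specifies in full.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(dec_states, enc_states, Wq, Wk, Wv):
    """Single-head cross-attention: decoder queries attend over encoder keys/values."""
    Q = dec_states @ Wq          # queries from the decoder side
    K = enc_states @ Wk          # keys from the encoder side
    V = enc_states @ Wv          # values from the encoder side
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot-product scores
    return softmax(scores) @ V

# Toy shapes: 4 decoder positions, 6 encoder positions, hidden size 8.
rng = np.random.default_rng(0)
d = 8
dec = rng.standard_normal((4, d))  # stand-in for frozen GPT-style decoder states
enc = rng.standard_normal((6, d))  # stand-in for frozen BERT-style encoder states
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

# Grafting idea: only this new sublayer is trained on parallel data;
# a residual connection adds it on top of the pre-trained decoder states.
grafted = dec + cross_attention(dec, enc, Wq, Wk, Wv)
print(grafted.shape)  # (4, 8)
```

Stacking such a sublayer into each decoder layer is one way to connect two independently pre-trained models while preserving what each learned from monolingual data.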
Anthology ID:
2021.findings-emnlp.233
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2735–2747
URL:
https://aclanthology.org/2021.findings-emnlp.233
DOI:
10.18653/v1/2021.findings-emnlp.233
Cite (ACL):
Zewei Sun, Mingxuan Wang, and Lei Li. 2021. Multilingual Translation via Grafting Pre-trained Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2735–2747, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Multilingual Translation via Grafting Pre-trained Language Models (Sun et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.233.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.233.mp4
Code:
sunzewei2715/Graformer