2021
Cross-lingual Visual Pre-training for Multimodal Machine Translation
Ozan Caglayan | Menekse Kuyu | Mustafa Sercan Amac | Pranava Madhyastha | Erkut Erdem | Aykut Erdem | Lucia Specia
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have produced cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training on three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
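The masked region classification objective mentioned in the abstract can be illustrated with a minimal sketch: for each image region hidden from the model, the model's predicted class distribution is scored against the label assigned by an object detector. The function name, the soft-label setup, and all values below are illustrative assumptions, not the paper's implementation.

```python
import math

def masked_region_classification_loss(region_probs, detector_labels, mask):
    """Cross-entropy over masked image regions only (illustrative sketch).

    region_probs    -- per-region predicted class distributions (lists of floats)
    detector_labels -- class index the object detector assigned to each region
    mask            -- True where the region was masked during pre-training
    """
    losses = []
    for probs, label, masked in zip(region_probs, detector_labels, mask):
        if masked:
            # negative log-likelihood of the detector's label for this region
            losses.append(-math.log(probs[label] + 1e-9))
    # average over masked regions; unmasked regions contribute no loss
    return sum(losses) / max(len(losses), 1)

# Example: two regions, only the first is masked.
loss = masked_region_classification_loss(
    region_probs=[[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]],
    detector_labels=[0, 2],
    mask=[True, False],
)
```

In the paper's setting this term would be combined with the translation language modelling loss over the parallel text, so that the shared encoder is trained jointly on text and region inputs.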