Multimodal Robustness for Neural Machine Translation

Yuting Zhao, Ioan Calapodescu


Abstract
In this paper, we look at the case of a generic text-to-text NMT model that has to deal with data coming from various modalities, such as speech, images, or noisy text extracted from the web. We propose a two-step method, based on composable adapters, to address this problem of Multimodal Robustness. In the first step, we separately learn domain adapters and modality-specific adapters to deal with noisy input coming from various sources: ASR, OCR, or noisy text (UGC). In the second step, we combine these components at runtime via dynamic routing or, when the source of noise is unknown, via two new transfer learning mechanisms (Fast Fusion and Multi Fusion). We show that our method provides a flexible, state-of-the-art architecture able to deal with noisy multimodal inputs.
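To make the composition idea concrete, here is a minimal sketch (not the authors' code) of bottleneck adapters routed per noise source, assuming a PyTorch-style encoder state; the class names (Adapter, RoutedAdapterLayer) and the softmax mixture standing in for the fusion mechanisms are illustrative assumptions, not the paper's actual Fast Fusion or Multi Fusion implementations.

```python
# Hypothetical sketch of composable adapters with dynamic routing.
# Assumptions: bottleneck adapters (down-project, non-linearity, up-project,
# residual) and a learned softmax mixture when the noise source is unknown.
from typing import Optional

import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter applied with a residual connection."""

    def __init__(self, d_model: int, d_bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))


class RoutedAdapterLayer(nn.Module):
    """One adapter per noise source (ASR, OCR, UGC), composed at runtime.

    If the noise source is known, the matching adapter is applied (dynamic
    routing). Otherwise all adapters are mixed with learned weights, a rough
    stand-in for the paper's fusion mechanisms.
    """

    def __init__(self, d_model: int, sources=("asr", "ocr", "ugc")):
        super().__init__()
        self.sources = sources
        self.adapters = nn.ModuleDict({s: Adapter(d_model) for s in sources})
        self.mix = nn.Parameter(torch.zeros(len(sources)))  # fusion weights

    def forward(self, h: torch.Tensor, source: Optional[str] = None) -> torch.Tensor:
        if source is not None:  # known noise source: route directly
            return self.adapters[source](h)
        weights = torch.softmax(self.mix, dim=0)  # unknown source: fuse all
        return sum(w * self.adapters[s](h) for w, s in zip(weights, self.sources))


# Usage: route a batch of encoder states through the ASR adapter,
# then through the learned fusion when the source is unknown.
layer = RoutedAdapterLayer(d_model=512)
h = torch.randn(8, 20, 512)  # (batch, seq_len, d_model)
out_known = layer(h, source="asr")
out_unknown = layer(h)
```

Keeping the adapters as separate modules and deciding the combination only at runtime is what makes the components composable: the backbone NMT model stays frozen while per-source adapters are trained and swapped or fused independently.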
Anthology ID:
2022.emnlp-main.582
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8505–8516
URL:
https://aclanthology.org/2022.emnlp-main.582
DOI:
10.18653/v1/2022.emnlp-main.582
Cite (ACL):
Yuting Zhao and Ioan Calapodescu. 2022. Multimodal Robustness for Neural Machine Translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8505–8516, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Multimodal Robustness for Neural Machine Translation (Zhao & Calapodescu, EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.582.pdf