Multimodal Quality Estimation for Machine Translation

Shu Okabe, Frédéric Blain, Lucia Specia


Abstract
We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.
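As a rough illustration of the kind of multimodality fusion the abstract refers to, the sketch below shows a simple late-fusion setup: a sentence-level text representation is concatenated with a pooled visual feature vector before a quality regressor. This is not the authors' implementation; the module names, feature dimensions, and score target are assumptions.

```python
# Minimal late-fusion sketch for multimodal QE (illustration only, not the
# paper's implementation; dimensions and names are assumptions).
import torch
import torch.nn as nn


class FusionQE(nn.Module):
    def __init__(self, text_dim=768, visual_dim=2048, hidden_dim=256):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(text_dim + visual_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted quality score (e.g. HTER or DA)
        )

    def forward(self, text_feats, visual_feats):
        # text_feats: (batch, text_dim) embedding of the source/MT sentence pair
        # visual_feats: (batch, visual_dim) pooled image features
        fused = torch.cat([text_feats, visual_feats], dim=-1)
        return self.regressor(fused).squeeze(-1)


if __name__ == "__main__":
    model = FusionQE()
    scores = model(torch.randn(4, 768), torch.randn(4, 2048))
    print(scores.shape)  # torch.Size([4])
```

Other fusion strategies compared in this line of work (e.g. early fusion or attention-based combination) would replace the concatenation step; the regressor head stays the same.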
Anthology ID:
2020.acl-main.114
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1233–1240
URL:
https://aclanthology.org/2020.acl-main.114
DOI:
10.18653/v1/2020.acl-main.114
Cite (ACL):
Shu Okabe, Frédéric Blain, and Lucia Specia. 2020. Multimodal Quality Estimation for Machine Translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1233–1240, Online. Association for Computational Linguistics.
Cite (Informal):
Multimodal Quality Estimation for Machine Translation (Okabe et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.114.pdf
Video:
http://slideslive.com/38929452