Detecting concrete visual tokens for Multimodal Machine Translation

Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup


Abstract
The challenge of visual grounding and masking in multimodal machine translation (MMT) systems has encouraged varying approaches to detecting and selecting visually-grounded text tokens for masking. We introduce new methods for detecting visually and contextually relevant (concrete) tokens in source sentences: detection with natural language processing (NLP), detection with object detection, and a joint detection-verification technique. We also introduce new methods for selecting among the detected tokens: the shortest n tokens, the longest n tokens, or all detected concrete tokens. Using the GRAM MMT architecture, we train models on synthetically collated multimodal datasets of source images paired with masked sentences, showing performance improvements and better use of visual context during translation over the baseline model.
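
The abstract names three detection strategies and three selection strategies without implementation detail. As a rough illustration only, the Python sketch below shows one way such a pipeline could look; spaCy, the placeholder concreteness lexicon, the 4.0 threshold, and all function names are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes spaCy is installed with the en_core_web_sm model.
import spacy

nlp = spacy.load("en_core_web_sm")

# Placeholder concreteness ratings (a real system might draw on published
# concreteness norms); the 4.0 cutoff is likewise an assumption.
CONCRETENESS = {"dog": 4.9, "ball": 4.9, "park": 4.4, "idea": 1.6}
THRESHOLD = 4.0

def detect_with_nlp(sentence: str) -> list[str]:
    """NLP-based detection: keep nouns whose concreteness clears the threshold."""
    return [tok.text for tok in nlp(sentence)
            if tok.pos_ == "NOUN"
            and CONCRETENESS.get(tok.lower_, 0.0) >= THRESHOLD]

def verify_with_detector(tokens: list[str], object_labels: set[str]) -> list[str]:
    """Joint detection-verification: keep tokens an object detector also found in the image."""
    labels = {label.lower() for label in object_labels}
    return [tok for tok in tokens if tok.lower() in labels]

def select_tokens(tokens: list[str], strategy: str = "all", n: int = 1) -> list[str]:
    """Selection strategies named in the abstract: shortest n, longest n, or all."""
    if strategy == "shortest":
        return sorted(tokens, key=len)[:n]
    if strategy == "longest":
        return sorted(tokens, key=len, reverse=True)[:n]
    return tokens

if __name__ == "__main__":
    sentence = "A dog chases a ball across the park."
    detected = detect_with_nlp(sentence)                        # ['dog', 'ball', 'park']
    verified = verify_with_detector(detected, {"dog", "ball"})  # ['dog', 'ball']
    print(select_tokens(verified, strategy="longest", n=1))     # ['ball']
```

In this sketch the verification step simply intersects NLP-detected tokens with object-detector labels; how the paper actually reconciles the two signals is not specified in the abstract.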
Anthology ID:
2024.amta-research.4
Volume:
Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)
Month:
September
Year:
2024
Address:
Chicago, USA
Editors:
Rebecca Knowles, Akiko Eriguchi, Shivali Goel
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
Pages:
29–38
URL:
https://aclanthology.org/2024.amta-research.4
Cite (ACL):
Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, and Jeremy Gwinnup. 2024. Detecting concrete visual tokens for Multimodal Machine Translation. In Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 29–38, Chicago, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Detecting concrete visual tokens for Multimodal Machine Translation (Bowen et al., AMTA 2024)
PDF:
https://aclanthology.org/2024.amta-research.4.pdf