DCU ADAPT at WMT24: English to Low-resource Multi-Modal Translation Task

Sami Haq, Rudali Huidrom, Sheila Castilho


Abstract
This paper presents the system description of “DCU_NMT’s” submission to the WMT-WAT24 English-to-Low-Resource Multimodal Translation Task. We participated in the English-to-Hindi track, developing both text-only and multimodal neural machine translation (NMT) systems. The text-only systems were trained from scratch on constrained data and augmented with back-translated data. For the multimodal approach, we implemented a context-aware transformer model that integrates visual features as additional contextual information. Specifically, image descriptions generated by an image captioning model were encoded using BERT and concatenated with the textual input. The results indicate that our multimodal system, trained solely on limited data, showed improvements over the text-only baseline in both the challenge and evaluation sets, suggesting the potential benefits of incorporating visual information.
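The fusion step described in the abstract — BERT-encoded caption embeddings concatenated with the textual input before translation — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, shapes, and the use of precomputed embeddings are all assumptions for the example.

```python
import numpy as np

def fuse_text_and_caption(src_emb: np.ndarray, cap_emb: np.ndarray) -> np.ndarray:
    """Concatenate caption embeddings onto the source-token embeddings
    along the sequence axis, so the translation encoder sees the image
    description as extra context. (Illustrative sketch only.)

    src_emb: (src_len, d)  -- source sentence token embeddings
    cap_emb: (cap_len, d)  -- BERT embeddings of the generated image caption
    returns: (src_len + cap_len, d)
    """
    assert src_emb.shape[1] == cap_emb.shape[1], "embedding dims must match"
    return np.concatenate([src_emb, cap_emb], axis=0)

# Toy usage with BERT-base-sized vectors (d = 768)
src = np.random.rand(10, 768)  # 10 source tokens
cap = np.random.rand(6, 768)   # 6 caption tokens
fused = fuse_text_and_caption(src, cap)
print(fused.shape)  # (16, 768)
```

In practice the caption would first be produced by an image captioning model and run through BERT; the concatenated sequence then feeds the context-aware transformer's encoder.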
Anthology ID:
2024.wmt-1.75
Volume:
Proceedings of the Ninth Conference on Machine Translation
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue:
WMT
Publisher:
Association for Computational Linguistics
Pages:
810–814
URL:
https://aclanthology.org/2024.wmt-1.75
Cite (ACL):
Sami Haq, Rudali Huidrom, and Sheila Castilho. 2024. DCU ADAPT at WMT24: English to Low-resource Multi-Modal Translation Task. In Proceedings of the Ninth Conference on Machine Translation, pages 810–814, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
DCU ADAPT at WMT24: English to Low-resource Multi-Modal Translation Task (Haq et al., WMT 2024)
PDF:
https://aclanthology.org/2024.wmt-1.75.pdf