Chitranuvad: Adapting Multi-lingual LLMs for Multimodal Translation

Shaharukh Khan, Ayush Tarun, Ali Faraz, Palash Kamble, Vivek Dahiya, Praveen Pokala, Ashish Kulkarni, Chandra Khatri, Abhinav Ravi, Shubham Agarwal


Abstract
In this work, we provide the system description of our submission as part of the English-to-Lowres Multimodal Translation Task at the Workshop on Asian Translation (WAT2024). We introduce Chitranuvad, a multimodal model that effectively integrates a Multilingual LLM and a vision module for Multimodal Translation. Our method uses a ViT image encoder to extract visual representations as visual token embeddings, which are projected to the LLM space by an adapter layer, and generates translations in an autoregressive fashion. We participated in all three tracks (Image Captioning, Text-only and Multimodal Translation tasks) for Indic languages (i.e., English translation to Hindi, Bengali and Malayalam) and achieved SOTA results for Hindi in all of them on the Challenge set, while remaining competitive for the other languages in the shared task.
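
The abstract describes a standard late-fusion pipeline: ViT patch embeddings are mapped into the LLM's embedding space by an adapter and prepended to the text tokens before autoregressive decoding. Below is a minimal PyTorch sketch of that fusion step, assuming a two-layer MLP adapter and illustrative dimensions; the class names, adapter design, and sizes are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class VisionAdapter(nn.Module):
    """Hypothetical adapter that projects ViT patch embeddings into the
    LLM's embedding space. The paper's exact adapter architecture and
    dimensions may differ."""

    def __init__(self, vit_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vit_dim) -> (batch, num_patches, llm_dim)
        return self.proj(patch_embeddings)


def build_multimodal_inputs(image_features: torch.Tensor,
                            text_token_embeddings: torch.Tensor,
                            adapter: VisionAdapter) -> torch.Tensor:
    """Prepend projected visual tokens to the text token embeddings so the
    multilingual LLM can attend to the image while producing the translation."""
    visual_tokens = adapter(image_features)                            # (B, P, D_llm)
    return torch.cat([visual_tokens, text_token_embeddings], dim=1)    # (B, P + T, D_llm)


if __name__ == "__main__":
    B, P, T = 2, 196, 32            # batch size, ViT patches, text tokens (illustrative)
    vit_dim, llm_dim = 1024, 4096   # illustrative encoder / LLM hidden sizes
    adapter = VisionAdapter(vit_dim, llm_dim)
    image_features = torch.randn(B, P, vit_dim)    # stand-in for ViT encoder output
    text_embeds = torch.randn(B, T, llm_dim)       # stand-in for LLM token embeddings
    fused = build_multimodal_inputs(image_features, text_embeds, adapter)
    print(fused.shape)  # torch.Size([2, 228, 4096])
    # The fused sequence would then be fed to the LLM, which generates the
    # target-language translation autoregressively.
```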
Anthology ID: 2024.wmt-1.80
Volume: Proceedings of the Ninth Conference on Machine Translation
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue: WMT
Publisher: Association for Computational Linguistics
Pages: 839–851
URL: https://aclanthology.org/2024.wmt-1.80
Cite (ACL): Shaharukh Khan, Ayush Tarun, Ali Faraz, Palash Kamble, Vivek Dahiya, Praveen Pokala, Ashish Kulkarni, Chandra Khatri, Abhinav Ravi, and Shubham Agarwal. 2024. Chitranuvad: Adapting Multi-lingual LLMs for Multimodal Translation. In Proceedings of the Ninth Conference on Machine Translation, pages 839–851, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Chitranuvad: Adapting Multi-lingual LLMs for Multimodal Translation (Khan et al., WMT 2024)
PDF: https://aclanthology.org/2024.wmt-1.80.pdf