Chitranuvad: Adapting Multi-lingual LLMs for Multimodal Translation
Shaharukh Khan | Ayush Tarun | Ali Faraz | Palash Kamble | Vivek Dahiya | Praveen Pokala | Ashish Kulkarni | Chandra Khatri | Abhinav Ravi | Shubham Agarwal
Proceedings of the Ninth Conference on Machine Translation
In this work, we provide the system description of our submission as part of the English-to-Lowres Multimodal Translation Task at the Workshop on Asian Translation (WAT2024). We introduce Chitranuvad, a multimodal model that effectively integrates a multilingual LLM and a vision module for multimodal translation. Our method uses a ViT image encoder to extract visual representations as visual token embeddings, which are projected into the LLM space by an adapter layer, and generates translations in an autoregressive fashion. We participated in all three tracks (Image Captioning, Text-only and Multimodal Translation tasks) for Indic languages (i.e., English translation to Hindi, Bengali and Malayalam) and achieved SOTA results for Hindi in all of them on the Challenge set, while remaining competitive for the other languages in the shared task.
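The abstract describes the core pipeline: ViT patch features are projected by an adapter into the LLM's embedding space and prepended to the source-text embeddings before autoregressive decoding. Below is a minimal, hypothetical sketch of that projection step; the feature dimensions (768 for the ViT, 4096 for the LLM), the two-layer MLP adapter, and all names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class VisualAdapter(nn.Module):
    """Hypothetical adapter projecting ViT patch embeddings into the LLM
    token-embedding space (dimensions and depth are assumed, not from the paper)."""

    def __init__(self, vit_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vit_features: torch.Tensor) -> torch.Tensor:
        # vit_features: (batch, num_patches, vit_dim)
        return self.proj(vit_features)  # (batch, num_patches, llm_dim)


def build_multimodal_prefix(image_embeds: torch.Tensor,
                            text_embeds: torch.Tensor) -> torch.Tensor:
    """Concatenate projected visual tokens before the source-text embeddings;
    the LLM would then decode the translation autoregressively from this prefix."""
    return torch.cat([image_embeds, text_embeds], dim=1)


if __name__ == "__main__":
    adapter = VisualAdapter()
    vit_out = torch.randn(1, 196, 768)    # dummy ViT patch features
    text_emb = torch.randn(1, 32, 4096)   # dummy source-sentence embeddings
    prefix = build_multimodal_prefix(adapter(vit_out), text_emb)
    print(prefix.shape)                   # torch.Size([1, 228, 4096])
```

The design choice of a lightweight projection adapter keeps the pretrained multilingual LLM and vision encoder largely intact while aligning their representation spaces for translation.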