2023
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language
Shantipriya Parida | Idris Abdulmumin | Shamsuddeen Hassan Muhammad | Aneesh Bose | Guneet Singh Kohli | Ibrahim Said Ahmad | Ketan Kotwal | Sayan Deb Sarkar | Ondřej Bojar | Habeebah Kakudi
Findings of the Association for Computational Linguistics: ACL 2023
This paper presents “HaVQA”, the first multimodal dataset for visual question answering (VQA) tasks in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs, which are associated with 1,555 unique images from the Visual Genome dataset. As a result, the dataset provides 12,044 gold standard English-Hausa parallel sentences that were translated in a fashion that guarantees their semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, including visual question answering, visual question elicitation, and text-only and multimodal machine translation.
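The structure of such a record, and how 6,022 question-answer pairs yield 12,044 parallel sentences, can be pictured with a small sketch. The field names below (image_id, question_en, question_ha, etc.) are hypothetical illustrations, not the released dataset's actual schema, and the Hausa strings are placeholders.

```python
from dataclasses import dataclass

# A minimal sketch of one HaVQA-style record; field names are assumed,
# not taken from the released dataset.
@dataclass
class HaVQAExample:
    image_id: int        # Visual Genome image identifier
    question_en: str     # original English question
    answer_en: str       # original English answer
    question_ha: str     # manual Hausa translation of the question
    answer_ha: str       # manual Hausa translation of the answer

example = HaVQAExample(
    image_id=0,                                    # placeholder id
    question_en="What color is the dog?",
    answer_en="The dog is brown.",
    question_ha="<Hausa translation of the question>",
    answer_ha="<Hausa translation of the answer>",
)

# Each QA pair contributes two parallel sentences (question and answer),
# so 6,022 pairs give 2 * 6,022 = 12,044 English-Hausa sentence pairs.
parallel_pairs = [
    (example.question_en, example.question_ha),
    (example.answer_en, example.answer_ha),
]
```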
2021
NLPHut’s Participation at WAT2021
Shantipriya Parida | Subhadarshi Panda | Ketan Kotwal | Amulya Ratna Dash | Satya Ranjan Dash | Yashvardhan Sharma | Petr Motlicek | Ondřej Bojar
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
This paper describes the submissions of our team “NLPHut” to the WAT 2021 shared tasks. We participated in the English→Hindi Multimodal translation task, the English→Malayalam Multimodal translation task, and the Indic Multilingual translation task. We used the state-of-the-art Transformer model with language tags in different settings for the translation tasks and proposed a novel “region-specific” caption generation approach that combines an image CNN with an LSTM for Hindi and Malayalam image captioning. Our submissions ranked first in the English→Malayalam Multimodal translation task (text-only translation and Malayalam caption generation) and second in the English→Hindi Multimodal translation task (text-only translation and Hindi caption generation). Our submissions also performed well in the Indic Multilingual translation task.
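The CNN-plus-LSTM captioning component can be pictured as a standard encoder-decoder pipeline. The sketch below is an assumed minimal PyTorch version (ResNet-50 backbone, image feature fed as the first decoder input); it is not the authors' exact “region-specific” model, which would operate on image regions rather than the full image.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal CNN-encoder / LSTM-decoder captioner (illustrative sketch only).
class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet50(weights=None)
        # Keep everything up to the global average pool; drop the classifier.
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.img_proj = nn.Linear(2048, embed_dim)   # image feature -> embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)        # (B, 2048)
        img_emb = self.img_proj(feats).unsqueeze(1)    # (B, 1, E)
        tok_emb = self.embed(captions)                 # (B, T, E)
        # The image embedding acts as the first "token" seen by the LSTM.
        inputs = torch.cat([img_emb, tok_emb], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                        # (B, T+1, vocab) logits
```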
Multimodal Neural Machine Translation System for English to Bengali
Shantipriya Parida | Subhadarshi Panda | Satya Prakash Biswal | Ketan Kotwal | Arghyadeep Sen | Satya Ranjan Dash | Petr Motlicek
Proceedings of the First Workshop on Multimodal Machine Translation for Low Resource Languages (MMTLRL 2021)
Multimodal Machine Translation (MMT) systems utilize additional information from modalities beyond text, typically images, to improve the quality of machine translation (MT). Despite proven advantages, it is difficult to develop an MMT system for many languages, primarily due to the lack of suitable multimodal datasets. In this work, we develop an MMT system for English→Bengali using the recently published Bengali Visual Genome (BVG) dataset, which contains images with associated bilingual textual descriptions. Through a comparative study of the developed MMT system vis-à-vis a text-to-text translation system, we demonstrate that the use of multimodal data not only improves translation performance, with BLEU score gains of +1.3 on the development set, +3.9 on the evaluation test set, and +0.9 on the challenge test set, but also helps resolve ambiguities in the pure text description. To the best of our knowledge, our English-Bengali MMT system is the first attempt in this direction and can thus serve as a baseline for subsequent research in MMT for low-resource languages.
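One common way to inject image information into a translation Transformer, and a plausible reading of the setup above, is to project a global CNN image feature and prepend it to the source token embeddings. The PyTorch sketch below illustrates that idea under assumed dimensions; it is not the paper's exact architecture, and positional encodings and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn

# Illustrative multimodal MT model: a projected image feature is treated
# as an extra source "token" before a standard Transformer.
class MultimodalMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=512, img_dim=2048):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.generator = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens, img_feat):
        # Prepend the projected image feature to the source sequence.
        src = torch.cat([self.img_proj(img_feat).unsqueeze(1),
                         self.src_embed(src_tokens)], dim=1)
        tgt = self.tgt_embed(tgt_tokens)
        # Causal mask so the decoder cannot look at future target tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.generator(out)   # (B, T_tgt, tgt_vocab) logits
```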