Aneesh Bose
2025
OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language
Shantipriya Parida | Shashikanta Sahoo | Sambit Sekhar | Kalyanamalini Sahoo | Ketan Kotwal | Sonal Khosla | Satya Ranjan Dash | Aneesh Bose | Guneet Singh Kohli | Smruti Smita Lenka | Ondřej Bojar
Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages
This paper introduces OVQA, the first multimodal dataset designed for visual question answering (VQA), visual question elicitation (VQE), and multimodal research in the low-resource Odia language. The dataset was created by manually translating 6,149 English question-answer pairs, each associated with one of 6,149 unique images from the Visual Genome dataset. This effort resulted in 27,809 English-Odia parallel sentences, translated so as to preserve a semantic match with the corresponding visual information. Several baseline experiments were conducted on the dataset, including visual question answering and visual question elicitation. As the first VQA dataset for the low-resource Odia language, it will be released for multimodal research and to help researchers extend this work to other low-resource languages.
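The record structure implied by the abstract (an English QA pair, its Odia translation, and the linked Visual Genome image) can be sketched as follows; a minimal illustration with hypothetical field names, not the released schema:

```python
from dataclasses import dataclass

@dataclass
class OVQAExample:
    """One hypothetical OVQA record: an English QA pair from Visual Genome
    together with its manual Odia translation and the source image id.
    Field names are illustrative, not the dataset's actual schema."""
    image_id: int      # Visual Genome image identifier
    question_en: str   # original English question
    answer_en: str     # original English answer
    question_or: str   # manual Odia translation of the question
    answer_or: str     # manual Odia translation of the answer

# Example usage with placeholder text:
ex = OVQAExample(
    image_id=1,
    question_en="What color is the car?",
    answer_en="Red",
    question_or="<Odia question>",
    answer_or="<Odia answer>",
)
print(ex.question_en, "->", ex.question_or)
```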
2023
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language
Shantipriya Parida | Idris Abdulmumin | Shamsuddeen Hassan Muhammad | Aneesh Bose | Guneet Singh Kohli | Ibrahim Said Ahmad | Ketan Kotwal | Sayan Deb Sarkar | Ondřej Bojar | Habeebah Kakudi
Findings of the Association for Computational Linguistics: ACL 2023
This paper presents “HaVQA”, the first multimodal dataset for visual question answering (VQA) tasks in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs, which are associated with 1,555 unique images from the Visual Genome dataset. As a result, the dataset provides 12,044 gold-standard English-Hausa parallel sentences, translated so as to guarantee a semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, including visual question answering, visual question elicitation, and text-only and multimodal machine translation.
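For the text-only machine translation baseline mentioned above, evaluation is typically done with corpus-level BLEU; a minimal sketch using sacrebleu with placeholder system outputs and references (the paper's actual baselines and metrics may differ):

```python
import sacrebleu  # pip install sacrebleu

# Placeholder system outputs and gold references; in practice these would be
# model translations and the HaVQA gold-standard Hausa sentences.
hypotheses = ["system translation 1", "system translation 2"]
references = [["gold translation 1", "gold translation 2"]]  # one reference set

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```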