LTRC-IIITH at MEDIQA-M3G 2024: Medical Visual Question Answering with Vision-Language Models

Jerrin Thomas, Sushvin Marimuthu, Parameswari Krishnamurthy


Abstract
In this paper, we present our submission to the MEDIQA-M3G 2024 shared task, which tackles multilingual and multimodal medical answer generation. Our system consists of a lightweight Vision-and-Language Transformer (ViLT) model fine-tuned for the clinical dermatology visual question-answering task. On the official leaderboard for the task, our system ranks 6th. After the challenge, we experiment with training the ViLT model on more data and explore the capabilities of large Vision-Language Models (VLMs) such as Gemini and LLaVA.
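For orientation, the sketch below shows how a ViLT checkpoint can be loaded and queried for visual question answering with Hugging Face Transformers. The dandelin/vilt-b32-finetuned-vqa checkpoint, image path, and question are illustrative assumptions; this is not the authors' exact fine-tuning or decoding setup for the dermatology task.

```python
# Minimal ViLT VQA sketch (assumed checkpoint, not the authors' pipeline).
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Assumed public checkpoint; the shared-task model was fine-tuned on clinical data.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("lesion.jpg")            # placeholder image path
question = "What is the likely diagnosis?"  # placeholder clinical query

# Encode the image-question pair and pick the highest-scoring answer label.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)
answer_id = outputs.logits.argmax(-1).item()
print(model.config.id2label[answer_id])
```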
Anthology ID:
2024.clinicalnlp-1.67
Volume:
Proceedings of the 6th Clinical Natural Language Processing Workshop
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
Venues:
ClinicalNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
703–707
URL:
https://aclanthology.org/2024.clinicalnlp-1.67
DOI:
10.18653/v1/2024.clinicalnlp-1.67
Cite (ACL):
Jerrin Thomas, Sushvin Marimuthu, and Parameswari Krishnamurthy. 2024. LTRC-IIITH at MEDIQA-M3G 2024: Medical Visual Question Answering with Vision-Language Models. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 703–707, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
LTRC-IIITH at MEDIQA-M3G 2024: Medical Visual Question Answering with Vision-Language Models (Thomas et al., ClinicalNLP-WS 2024)
PDF:
https://aclanthology.org/2024.clinicalnlp-1.67.pdf