WISMIR3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches

Florian Schneider, Chris Biemann


Abstract
This paper presents WISMIR3, a multi-modal dataset comprising roughly 300K text-image pairs from Wikipedia. With a sophisticated automatic ETL pipeline, we scraped, filtered, and transformed the data so that WISMIR3 intrinsically differs from other popular text-image datasets such as COCO and Flickr30k. We demonstrate this difference by comparing various linguistic statistics across the three datasets, computed with the pipeline. The primary purpose of WISMIR3 is to serve as a benchmark that challenges state-of-the-art text-image retrieval approaches, which already reach around 90% Recall@5 on the aforementioned popular datasets. We therefore ran several text-image retrieval experiments on our dataset with current models; the results show that these models indeed perform significantly worse than they do on COCO and Flickr30k. In addition, for each text-image pair, we release features computed by Faster R-CNN and CLIP models. With this, we aim to ease and motivate the use of the dataset by other researchers.
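To illustrate the kind of evaluation the abstract describes, below is a minimal sketch of computing text-to-image Recall@K from pre-extracted CLIP embeddings. The file names, array shapes, and helper function are assumptions for illustration, not the paper's released format or evaluation code.

```python
# Hypothetical sketch: text-to-image Recall@K from paired CLIP embeddings.
# File names and shapes are assumptions, not the WISMIR3 release format.
import numpy as np

def recall_at_k(text_emb: np.ndarray, image_emb: np.ndarray, k: int = 5) -> float:
    """Fraction of texts whose paired image (same row index) ranks in the top k.

    text_emb, image_emb: (N, D) arrays of paired embeddings.
    """
    # L2-normalize so the dot product equals cosine similarity.
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)

    sims = text_emb @ image_emb.T  # (N, N) similarity matrix
    # Rank of the ground-truth image for each text = number of images
    # scoring strictly higher than the matching pair on the diagonal.
    gt_scores = np.diag(sims)
    ranks = (sims > gt_scores[:, None]).sum(axis=1)
    return float((ranks < k).mean())

if __name__ == "__main__":
    # Placeholder file names; substitute the actual released feature files.
    text_emb = np.load("wismir3_clip_text_features.npy")
    image_emb = np.load("wismir3_clip_image_features.npy")
    print(f"Recall@5: {recall_at_k(text_emb, image_emb, k=5):.3f}")
```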
Anthology ID:
2024.alvr-1.1
Volume:
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Jing Gu, Tsu-Jui (Ray) Fu, Drew Hudson, Asli Celikyilmaz, William Wang
Venues:
ALVR | WS
Publisher:
Association for Computational Linguistics
Pages:
1–6
URL:
https://aclanthology.org/2024.alvr-1.1
Cite (ACL):
Florian Schneider and Chris Biemann. 2024. WISMIR3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches. In Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR), pages 1–6, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
WISMIR3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches (Schneider & Biemann, ALVR-WS 2024)
PDF:
https://aclanthology.org/2024.alvr-1.1.pdf