Babel-ImageNet: Massively Multilingual Evaluation of Vision-and-Language Representations

Gregor Geigle, Radu Timofte, Goran Glavaš


Abstract
Vision-and-language (VL) models with separate encoders for each modality (e.g., CLIP) have become the go-to models for zero-shot image classification and image-text retrieval. They are, however, mostly evaluated in English, as multilingual benchmarks are limited in availability. We introduce Babel-ImageNet, a massively multilingual benchmark that offers (partial) translations of ImageNet labels to 100 languages, built without machine translation or manual annotation. Instead, we automatically obtain reliable translations by linking them – via shared WordNet synsets – to BabelNet, a massively multilingual lexico-semantic network. We evaluate 11 public multilingual CLIP models on zero-shot image classification (ZS-IC) on our benchmark, demonstrating a significant gap between English ImageNet performance and that of high-resource languages (e.g., German or Chinese), and an even larger gap for low-resource languages (e.g., Sinhala or Lao). Crucially, we show that the models' ZS-IC performance highly correlates with their performance in image-text retrieval, validating the use of Babel-ImageNet to evaluate multilingual models for the vast majority of languages without gold image-text data. Finally, we show that the performance of multilingual CLIP can be drastically improved for low-resource languages with parameter-efficient language-specific training. We make our code and data publicly available: https://github.com/gregor-ge/Babel-ImageNet
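To make the evaluation protocol concrete, the following is a minimal sketch of multilingual zero-shot image classification with an openly available multilingual CLIP model, using the open_clip library. The checkpoint name, the naive German prompt template, the three example class names, and the image path are illustrative assumptions rather than the paper's exact setup; Babel-ImageNet supplies such (partial) label translations for 100 languages.

import torch
import open_clip
from PIL import Image

# Assumed multilingual CLIP checkpoint; any open_clip model with a
# multilingual text encoder could be substituted here.
MODEL = "xlm-roberta-base-ViT-B-32"
model, _, preprocess = open_clip.create_model_and_transforms(
    MODEL, pretrained="laion5b_s13b_b90k"
)
tokenizer = open_clip.get_tokenizer(MODEL)
model.eval()

# Illustrative (partial) German label set; Babel-ImageNet provides such
# translations of ImageNet class names, obtained via BabelNet synsets.
class_names = ["Goldfisch", "Tigerhai", "Strauß"]
prompts = [f"ein Foto von {c}" for c in class_names]  # naive prompt template

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(tokenizer(prompts))
    # Cosine similarity between the image embedding and every class-prompt embedding.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

print("predicted class:", class_names[probs.argmax().item()])

Scoring the predictions over all images whose classes have a translation in a given language yields that language's ZS-IC accuracy, the quantity the benchmark compares across languages and models.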
Anthology ID:
2024.acl-long.277
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5064–5084
URL:
https://aclanthology.org/2024.acl-long.277
DOI:
10.18653/v1/2024.acl-long.277
Cite (ACL):
Gregor Geigle, Radu Timofte, and Goran Glavaš. 2024. Babel-ImageNet: Massively Multilingual Evaluation of Vision-and-Language Representations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5064–5084, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Babel-ImageNet: Massively Multilingual Evaluation of Vision-and-Language Representations (Geigle et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.277.pdf