Delving into the Openness of CLIP

Shuhuai Ren, Lei Li, Xuancheng Ren, Guangxiang Zhao, Xu Sun


Abstract
Contrastive Language-Image Pre-training (CLIP) formulates image classification as an image-to-text matching task, i.e., matching images to the corresponding natural language descriptions instead of discrete category IDs. This allows for open-vocabulary visual recognition, where the model can recognize images from an open class set (also known as an open vocabulary) in a zero-shot manner. However, evaluating the openness of CLIP-like models is challenging, as the models are open to arbitrary vocabulary in theory, but their accuracy varies in practice. To address this, we resort to an incremental perspective to assess the openness through vocabulary expansions, and define extensibility to measure a model’s ability to handle novel classes. Our evaluation shows that CLIP-like models are not truly open, and their performance deteriorates as the vocabulary expands. We further dissect the feature space of CLIP from the perspectives of representation alignment and uniformity. Our investigation reveals that the overestimation of openness is due to confusion among competing text features, rather than a failure to capture the similarity between image features and text features of novel classes. We hope that our investigation and analysis will facilitate future research on the CLIP openness issue.
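The image-to-text matching view of classification described in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' code: it uses random placeholder embeddings in place of real CLIP encoder outputs, shows zero-shot prediction as cosine similarity between image and text features, mimics a vocabulary expansion with extra competing text features, and computes alignment and uniformity in one common formulation (paired-feature distance and pairwise Gaussian-potential spread); the exact definitions used in the paper may differ.

```python
# Minimal sketch (not from the paper): zero-shot classification as
# image-to-text matching, plus alignment/uniformity of a feature space.
# Embeddings are random placeholders; in practice they would come from
# CLIP's image and text encoders, L2-normalized.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Placeholder embeddings: 8 images, a vocabulary of 5 class prompts
# (e.g. "a photo of a {class}").
image_feats = l2_normalize(rng.normal(size=(8, 512)))
text_feats = l2_normalize(rng.normal(size=(5, 512)))

# Zero-shot prediction: each image is assigned the class whose text
# feature has the highest cosine similarity (a dot product here,
# since the features are unit-normalized).
logits = image_feats @ text_feats.T          # shape (8, 5)
predictions = logits.argmax(axis=1)

# Expanding the vocabulary adds competing text features; predictions can
# change even though the image features are untouched, which is the
# effect probed when measuring extensibility under vocabulary expansion.
extra_text_feats = l2_normalize(rng.normal(size=(3, 512)))
expanded_logits = image_feats @ np.vstack([text_feats, extra_text_feats]).T
expanded_predictions = expanded_logits.argmax(axis=1)

# Alignment: mean squared distance between paired image/text features
# (lower means matched pairs lie closer together).
def alignment(x, y, alpha=2):
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

# Uniformity: log of the mean pairwise Gaussian potential; more negative
# means features spread more evenly over the hypersphere.
def uniformity(x, t=2):
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.log(np.mean(np.exp(-t * sq_dists[iu])))
```

With real CLIP features in place of the random placeholders, comparing predictions before and after the vocabulary expansion is the incremental evaluation the abstract describes.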
Anthology ID: 2023.findings-acl.610
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 9587–9606
URL: https://aclanthology.org/2023.findings-acl.610
DOI: 10.18653/v1/2023.findings-acl.610
Cite (ACL): Shuhuai Ren, Lei Li, Xuancheng Ren, Guangxiang Zhao, and Xu Sun. 2023. Delving into the Openness of CLIP. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9587–9606, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Delving into the Openness of CLIP (Ren et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.610.pdf