How and where does CLIP process negation?

Vincent Quantmeyer, Pablo Mosteiro, Albert Gatt


Abstract
Various benchmarks have been proposed to test linguistic understanding in pre-trained vision & language (VL) models. Here we build on the existence task from the VALSE benchmark (Parcalabescu et al., 2022), which we use to test models’ understanding of negation, a particularly interesting issue for multimodal models. However, while such VL benchmarks are useful for measuring model performance, they do not reveal anything about the internal processes through which these models arrive at their outputs on such visio-linguistic tasks. We take inspiration from the growing literature on model interpretability to explain the behaviour of VL models on negation understanding. Specifically, we approach these questions through an in-depth analysis of the text encoder in CLIP (Radford et al., 2021), a highly influential VL model. We localise the parts of the encoder that process negation and analyse the role of attention heads in this task. Our contributions are threefold. We demonstrate how methods from the language model interpretability literature (e.g., causal tracing) can be translated to multimodal models and tasks; we provide concrete insights into how CLIP processes negation on the VALSE existence task; and we highlight inherent limitations in the VALSE dataset as a benchmark for linguistic understanding.
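To give a flavour of the causal-tracing method the abstract mentions, the sketch below illustrates activation patching on a toy PyTorch model rather than on CLIP itself (the toy model, layer choice, and hook names are illustrative assumptions, not the authors' code): cache one layer's activation from a "clean" run, patch it into a run on a "corrupted" input, and check how much of the clean output is restored.

```python
# Toy illustration of causal tracing (activation patching).
# NOT the paper's implementation: a stand-in MLP replaces CLIP's text
# encoder so the mechanics are visible without loading a real model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model; in the paper the analysed module is CLIP's text encoder.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),  # model[2] is the patched site
    nn.Linear(16, 1),
)

clean = torch.randn(1, 8)      # e.g. a caption with negation
corrupted = torch.randn(1, 8)  # e.g. the caption with negation removed

# 1. Run on the clean input and cache the chosen layer's activation.
cache = {}
def save_hook(module, inp, out):
    cache["act"] = out.detach()
handle = model[2].register_forward_hook(save_hook)
clean_out = model(clean)
handle.remove()

# 2. Run on the corrupted input, overwriting that layer's output
#    with the cached clean activation.
def patch_hook(module, inp, out):
    return cache["act"]
handle = model[2].register_forward_hook(patch_hook)
patched_out = model(corrupted)
handle.remove()

corrupted_out = model(corrupted)

# If the patched site carries the causally relevant signal, the patched
# run lands closer to the clean output than the unpatched corrupted run.
print((patched_out - clean_out).abs().item(),
      (corrupted_out - clean_out).abs().item())
```

Because everything downstream of the patched layer then sees the clean activation, the patched output here matches the clean output exactly; in a real model one instead measures how much of the clean behaviour each layer or attention head restores.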
Anthology ID:
2024.alvr-1.5
Volume:
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Jing Gu, Tsu-Jui (Ray) Fu, Drew Hudson, Asli Celikyilmaz, William Wang
Venues:
ALVR | WS
Publisher:
Association for Computational Linguistics
Pages:
59–72
URL:
https://aclanthology.org/2024.alvr-1.5
Cite (ACL):
Vincent Quantmeyer, Pablo Mosteiro, and Albert Gatt. 2024. How and where does CLIP process negation?. In Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR), pages 59–72, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
How and where does CLIP process negation? (Quantmeyer et al., ALVR-WS 2024)
PDF:
https://aclanthology.org/2024.alvr-1.5.pdf