Don’t Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models

Anna Bavaresco, Alberto Testoni, Raquel Fernández


Abstract
Image-based advertisements are complex multimodal stimuli that often contain unusual visual elements and figurative language. Previous research on automatic ad understanding has reported impressive zero-shot accuracy of contrastive vision-and-language models (VLMs) on an ad-explanation retrieval task. Here, we examine the original task setup and show that contrastive VLMs can solve it by exploiting grounding heuristics. To control for this confound, we introduce TRADE, a new evaluation test set with adversarial grounded explanations. While these explanations look implausible to humans, we show that they “fool” four different contrastive VLMs. Our findings highlight the need for an improved operationalisation of automatic ad understanding that truly evaluates VLMs’ multimodal reasoning abilities. We make our code and TRADE available at https://github.com/dmg-illc/trade.
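The task reassessed in the paper, zero-shot ad-explanation retrieval with a contrastive VLM, amounts to scoring each candidate explanation against the ad image and selecting the highest-scoring one. Below is a minimal illustrative sketch of that setup using CLIP through the Hugging Face transformers API; the checkpoint, file name, and candidate sentences are placeholders rather than the paper's actual models or data (the authors' own code is in the linked repository).

    # Minimal sketch (not the paper's exact pipeline): score candidate ad
    # explanations against an ad image with a contrastive VLM (CLIP) in a
    # zero-shot retrieval setup. Checkpoint and file paths are illustrative.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("ad_image.jpg")  # hypothetical ad image
    candidate_explanations = [
        "I should buy this car because it is as fast as a cheetah.",
        "I should buy this soda because it keeps polar bears cool.",
        "I should buy this watch because it never loses time.",
    ]

    inputs = processor(text=candidate_explanations, images=image,
                       return_tensors="pt", padding=True, truncation=True)
    outputs = model(**inputs)

    # logits_per_image holds the image-text similarity for each candidate;
    # the top-scoring explanation is the model's retrieved answer.
    scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    best = candidate_explanations[scores.argmax().item()]
    print(best, scores.tolist())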
Anthology ID: 2024.acl-short.77
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 870–879
URL: https://aclanthology.org/2024.acl-short.77
Cite (ACL): Anna Bavaresco, Alberto Testoni, and Raquel Fernández. 2024. Don’t Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 870–879, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Don’t Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models (Bavaresco et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-short.77.pdf