If I feel smart, I will do the right thing: Combining Complementary Multimodal Information in Visual Language Models

Yuyu Bai, Sandro Pezzelle


Abstract
Generative visual language models (VLMs) have recently shown potential across various downstream language-and-vision tasks. At the same time, it remains an open question whether, and to what extent, these models can properly understand a multimodal context where language and vision provide complementary information, a mechanism routinely at play in human communication. In this work, we test various VLMs on the task of generating action descriptions consistent with both an image’s visual content and an intention or attitude (not visually grounded) conveyed by a textual prompt. Our results show that BLIP-2 is not far from human performance when the task is framed as a generative multiple-choice problem, while other models struggle. Furthermore, the actions generated by BLIP-2 in an open-ended generative setting are better than those produced by competing models; indeed, human annotators judge most of them to be plausible continuations of the multimodal context. Our study reveals substantial variability among VLMs in integrating complementary multimodal information, yet BLIP-2 shows promising trends across most evaluations, paving the way for more seamless human-computer interaction.
Anthology ID: 2025.evalmg-1.3
Volume: Proceedings of the First Workshop of Evaluation of Multi-Modal Generation
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Wei Emma Zhang, Xiang Dai, Desmond Elliott, Byron Fang, Mongyuan Sim, Haojie Zhuang, Weitong Chen
Venues: EvalMG | WS
Publisher: Association for Computational Linguistics
Pages: 24–39
URL: https://aclanthology.org/2025.evalmg-1.3/
Cite (ACL):
Yuyu Bai and Sandro Pezzelle. 2025. If I feel smart, I will do the right thing: Combining Complementary Multimodal Information in Visual Language Models. In Proceedings of the First Workshop of Evaluation of Multi-Modal Generation, pages 24–39, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
If I feel smart, I will do the right thing: Combining Complementary Multimodal Information in Visual Language Models (Bai & Pezzelle, EvalMG 2025)
PDF: https://aclanthology.org/2025.evalmg-1.3.pdf