An Examination of the Compositionality of Large Generative Vision-Language Models

Teli Ma, Rong Li, Junwei Liang


Abstract
With the success of Large Language Models (LLMs), many Generative Vision-Language Models (GVLMs) have been constructed via multimodal instruction tuning. However, the performance of GVLMs in multimodal compositional reasoning remains under-explored. In this paper, we examine both the evaluation metrics (e.g., VisualGPTScore) and current benchmarks for evaluating the compositionality of GVLMs. We identify a syntactical bias in current benchmarks, which is exploited by the linguistic capability of GVLMs. This bias renders VisualGPTScore an insufficient metric for assessing GVLMs. To combat this, we first introduce a **SyntaxBias Score**, leveraging LLMs to quantify such bias for mitigation. A challenging new task is subsequently added to evaluate the robustness of GVLMs against their inherent inclination toward syntactical correctness. Using the bias-mitigated datasets and the new task, we propose a novel benchmark, namely the **S**ynt**A**ctically **DE**-biased benchmark (SADE). Our study provides an unbiased benchmark for the compositionality of GVLMs, facilitating future research in this direction. Code and dataset are available at https://github.com/TeleeMa/SADE.
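The VisualGPTScore metric mentioned above scores an image-text pair by the generative likelihood a GVLM assigns to the caption conditioned on the image, i.e., P(text | image). A minimal sketch of this idea, using dummy per-token log-probabilities in place of a real vision-language model (the function name and the length-normalization choice are illustrative assumptions, not the paper's exact formulation):

```python
import math

def visual_gpt_score(token_logprobs):
    """VisualGPTScore-style metric: length-normalized log-likelihood
    of a candidate caption under a generative VLM, P(text | image).

    `token_logprobs` stands in for the per-token log-probabilities a
    real model would assign to the caption given the image; here we
    use hand-picked dummy values for illustration only.
    """
    return sum(token_logprobs) / len(token_logprobs)

# Toy example: a matching caption typically receives higher
# (less negative) per-token log-probs than a scrambled negative.
positive_caption_logprobs = [-0.5, -0.8, -0.4, -0.6]
negative_caption_logprobs = [-2.1, -1.7, -2.4, -1.9]

pos_score = visual_gpt_score(positive_caption_logprobs)
neg_score = visual_gpt_score(negative_caption_logprobs)
print(pos_score > neg_score)
```

The paper's observation is that because such scores come from a language-model decoder, syntactically fluent but visually wrong negatives can still receive inflated likelihoods, which is the bias SADE is designed to remove.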
Anthology ID:
2024.naacl-long.39
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
692–705
URL:
https://aclanthology.org/2024.naacl-long.39
Cite (ACL):
Teli Ma, Rong Li, and Junwei Liang. 2024. An Examination of the Compositionality of Large Generative Vision-Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 692–705, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
An Examination of the Compositionality of Large Generative Vision-Language Models (Ma et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.39.pdf
Copyright:
https://aclanthology.org/2024.naacl-long.39.copyright.pdf