Beyond Visual Understanding Introducing PARROT-360V for Vision Language Model Benchmarking

Harsha Vardhan Khurdula, Basem Rizk, Indus Khaitan


Abstract
Current benchmarks for evaluating Vision Language Models (VLMs) often fall short in thoroughly assessing these models’ abilities to understand and process complex visual and textual content. They typically focus on simple tasks that do not require deep reasoning or the integration of multiple data modalities to solve an original problem. To address this gap, we introduce the PARROT-360V Benchmark, a novel and comprehensive benchmark featuring 2487 challenging visual puzzles designed to test VLMs on complex visual reasoning tasks. We evaluated leading models—GPT-4o, Claude-3.5-Sonnet, and Gemini-1.5-Pro—using PARROT-360V to assess their capabilities in combining visual clues with language skills to solve tasks in a manner akin to human problem-solving. Our findings reveal a notable performance gap: state-of-the-art models scored between 28% and 56% on our benchmark, significantly lower than their performance on popular benchmarks. This underscores the limitations of current VLMs in handling complex, multi-step reasoning tasks and highlights the need for more robust evaluation frameworks to advance the field.
Anthology ID:
2025.coling-industry.6
Volume:
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Kareem Darwish, Apoorv Agarwal
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
68–75
URL:
https://aclanthology.org/2025.coling-industry.6/
Cite (ACL):
Harsha Vardhan Khurdula, Basem Rizk, and Indus Khaitan. 2025. Beyond Visual Understanding Introducing PARROT-360V for Vision Language Model Benchmarking. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 68–75, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Beyond Visual Understanding Introducing PARROT-360V for Vision Language Model Benchmarking (Khurdula et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-industry.6.pdf