CONSTRUCTURE: Benchmarking CONcept STRUCTUre REasoning for Multimodal Large Language Models

Zhiwei Zha, Xiangru Zhu, Yuanyi Xu, Chenghua Huang, Jingping Liu, Zhixu Li, Xuwu Wang, Yanghua Xiao, Bei Yang, Xiaoxiao Xu


Abstract
Multimodal Large Language Models (MLLMs) have shown promising results in various tasks, but their ability to perceive the visual world with the deep, hierarchical understanding characteristic of humans remains uncertain. To address this gap, we introduce CONSTRUCTURE, a novel concept-level benchmark to assess MLLMs’ hierarchical concept understanding and reasoning abilities. Our goal is to evaluate MLLMs across four key aspects: 1) Understanding atomic concepts at different levels of abstraction; 2) Performing upward abstraction reasoning across concepts; 3) Achieving downward concretization reasoning across concepts; and 4) Conducting multi-hop reasoning between sibling or common ancestor concepts. Our findings indicate that even state-of-the-art multimodal models struggle with concept structure reasoning (e.g., GPT-4o averages a score of 62.1%). We summarize the key findings of MLLMs in concept structure reasoning evaluation. Moreover, we provide key insights from experiments using CoT prompting and fine-tuning to enhance their abilities.
Anthology ID:
2024.findings-emnlp.285
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4954–4968
URL:
https://aclanthology.org/2024.findings-emnlp.285
Cite (ACL):
Zhiwei Zha, Xiangru Zhu, Yuanyi Xu, Chenghua Huang, Jingping Liu, Zhixu Li, Xuwu Wang, Yanghua Xiao, Bei Yang, and Xiaoxiao Xu. 2024. CONSTRUCTURE: Benchmarking CONcept STRUCTUre REasoning for Multimodal Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4954–4968, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
CONSTRUCTURE: Benchmarking CONcept STRUCTUre REasoning for Multimodal Large Language Models (Zha et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.285.pdf
Data:
 2024.findings-emnlp.285.data.zip