2025
What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning
Yifan Du | Hangyu Guo | Kun Zhou | Wayne Xin Zhao | Jinpeng Wang | Chuyuan Wang | Mingchen Cai | Ruihua Song | Ji-Rong Wen
Proceedings of the 31st International Conference on Computational Linguistics
Visual instruction tuning is crucial for enhancing the zero-shot generalization capability of Multi-modal Large Language Models (MLLMs). In this paper, we aim to investigate a fundamental question: “what makes for good visual instructions?” Through a comprehensive empirical study, we find that instructions focusing on complex visual reasoning tasks are particularly effective in improving the performance of MLLMs, with gains that correlate with instruction complexity. Based on this insight, we develop a systematic approach to automatically create high-quality complex visual reasoning instructions. Our approach employs a synthesize-complicate-reformulate paradigm, leveraging multiple stages to gradually increase the complexity of the instructions while guaranteeing quality. Based on this approach, we create the ComVint dataset with 32K examples, and fine-tune four MLLMs on it. Experimental results consistently demonstrate the enhanced performance of all compared MLLMs, such as a 27.86% and 27.60% improvement for LLaVA on MME-Perception and MME-Cognition, respectively. Our code and data are publicly available at: https://github.com/RUCAIBox/ComVint.
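The synthesize-complicate-reformulate paradigm described in the abstract can be sketched as a simple staged pipeline. The sketch below is a hypothetical illustration based only on the abstract: the stage functions, the number of complication rounds, and the LLM-backed `generate` callable (stubbed here) are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the synthesize-complicate-reformulate pipeline
# from the ComVint abstract. In practice each stage would be backed by an
# LLM prompt; here `generate` is an injected callable so the control flow
# can be shown and tested without a model.

def build_instruction(image_caption, generate, rounds=2):
    """Produce one complex visual reasoning instruction via three stages."""
    # Stage 1 (synthesize): create an initial, simple instruction
    # from available image information.
    instruction = generate(f"Write a simple question about: {image_caption}")
    # Stage 2 (complicate): gradually increase complexity over several rounds.
    for _ in range(rounds):
        instruction = generate(
            f"Make this question require more reasoning steps: {instruction}"
        )
    # Stage 3 (reformulate): rewrite the result to guarantee quality.
    return generate(f"Rewrite clearly and concisely: {instruction}")

# Usage with a trivial stub standing in for a real LLM call:
stub = lambda prompt: prompt.split(": ", 1)[1] + " [+]"
print(build_instruction("a dog chasing a frisbee", stub))
```

Separating the stages this way lets each one be prompted, filtered, or swapped independently, which matches the abstract's goal of raising complexity gradually while keeping quality checks at the end.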