M2PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
Taowen Wang | Yiyang Liu | James Chenhao Liang | Junhan Zhao | Yiming Cui | Yuning Mao | Shaoliang Nie | Jiahao Liu | Fuli Feng | Zenglin Xu | Cheng Han | Lifu Huang | Qifan Wang | Dongfang Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on single modalities and often overlook the multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M2PT) approach for efficient instruction tuning of MLLMs. M2PT effectively integrates visual and textual prompts into the vision encoder and language processor respectively during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach.
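The abstract describes integrating learnable visual and textual prompts into the vision encoder and language processor while the backbone stays frozen. The following is a minimal sketch of that general prompt-tuning idea, not the authors' implementation: all array sizes, variable names, and the simple "prepend prompts to the token sequence" scheme are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of multimodal prompt tuning (not the paper's code):
# learnable prompt vectors are prepended to the frozen token embeddings of
# each modality; during finetuning only the prompts would receive gradients.

rng = np.random.default_rng(0)

D = 8                 # embedding dimension (toy size)
N_VIS, N_TXT = 5, 7   # number of visual / textual tokens
P_VIS, P_TXT = 2, 3   # number of visual / textual prompt tokens

# Frozen token embeddings from the vision encoder and text embedder.
vis_tokens = rng.normal(size=(N_VIS, D))
txt_tokens = rng.normal(size=(N_TXT, D))

# Learnable prompts: the only parameters updated during finetuning.
vis_prompts = rng.normal(scale=0.02, size=(P_VIS, D))
txt_prompts = rng.normal(scale=0.02, size=(P_TXT, D))

def prepend_prompts(prompts, tokens):
    """Concatenate prompt vectors ahead of the token sequence."""
    return np.concatenate([prompts, tokens], axis=0)

vis_seq = prepend_prompts(vis_prompts, vis_tokens)  # shape (P_VIS + N_VIS, D)
txt_seq = prepend_prompts(txt_prompts, txt_tokens)  # shape (P_TXT + N_TXT, D)

print(vis_seq.shape, txt_seq.shape)  # (7, 8) (10, 8)
```

The prompted sequences would then be fed through the (frozen) encoder layers, so the prompts can steer feature extraction and cross-modal alignment at a small fraction of the model's parameter count.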