Advancing Vision-Language Models with Adapter Ensemble Strategies
Yue Bai | Handong Zhao | Zhe Lin | Ajinkya Kale | Jiuxiang Gu | Tong Yu | Sungchul Kim | Yun Fu
Findings of the Association for Computational Linguistics: EMNLP 2024
CLIP revolutionized vision-language pretraining by using contrastive learning on paired web data. However, the sheer size of these pretrained models makes full-model finetuning exceedingly costly. One common solution is the “adapter”, which finetunes a few additional parameters while freezing the backbone. It harnesses the heavy-duty backbone while offering lightweight finetuning for small downstream tasks. This synergy prompts us to explore the potential of augmenting large-scale backbones with traditional machine learning techniques. Often employed in traditional fields and overlooked in the large-scale era, these techniques could provide valuable enhancements. Herein, we delve into “adapter ensembles” in the realm of large-scale pretrained vision-language models. We begin with a proof-of-concept study to establish the efficacy of combining multiple adapters. We then present extensive evidence showing that these ensembles excel in a variety of settings, particularly when employing a Multi-Scale Attention (MSA) approach thoughtfully integrated into the ensemble framework. We further incorporate LoRA to mitigate the additional parameter burden. We focus on vision-language retrieval, using different backbones under constraints of minimal data, parameters, and finetuning budgets. This research paves the way for a synergistic blend of traditional, yet effective, strategies with modern large-scale networks.
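As a rough illustration of the core idea, the sketch below (not the authors' code) places several LoRA-style low-rank adapters in parallel on top of a frozen backbone block and averages their outputs as a residual. The module names, rank, adapter count, and the simple averaging rule are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an "adapter ensemble" on a frozen backbone, assuming
# parallel LoRA-style adapters whose outputs are averaged (an assumption,
# not the paper's exact design).
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """One low-rank adapter: down-project to rank r, then up-project back."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # dim -> r
        self.up = nn.Linear(rank, dim, bias=False)    # r -> dim
        nn.init.zeros_(self.up.weight)                # start as a zero residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class AdapterEnsemble(nn.Module):
    """A frozen backbone block plus an averaged ensemble of adapters."""

    def __init__(self, backbone_block: nn.Module, dim: int,
                 num_adapters: int = 4, rank: int = 8):
        super().__init__()
        self.backbone_block = backbone_block
        for p in self.backbone_block.parameters():
            p.requires_grad = False                   # only adapters are trained
        self.adapters = nn.ModuleList(
            LoRAAdapter(dim, rank) for _ in range(num_adapters)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone_block(x)
        # Average the parallel adapter outputs and add them as a residual.
        delta = torch.stack([a(h) for a in self.adapters]).mean(dim=0)
        return h + delta


if __name__ == "__main__":
    block = nn.Linear(512, 512)                       # stand-in for a CLIP layer
    model = AdapterEnsemble(block, dim=512)
    out = model(torch.randn(2, 512))
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, trainable)                       # only adapter params train
```

Because each adapter is low-rank, the ensemble adds only a small number of trainable parameters relative to the frozen backbone, which is the budget constraint the paper targets.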