Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models

Yongjin Yang, Jongwoo Ko, Se-Young Yun


Abstract
Vision-language models (VLMs) such as CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification. Recently, the use of prompts or adapters for efficient transfer learning (ETL) has gained significant attention for effectively adapting VLMs to downstream tasks. However, previous studies have overlooked the fact that downstream tasks vary in transfer difficulty. In this paper, we empirically analyze how each ETL method behaves with respect to transfer difficulty. Our observations indicate that using vision prompts and text adapters is crucial for adaptability and generalizability in high-difficulty domains. Moreover, an adaptive ensemble that integrates task-adapted VLMs with pre-trained VLMs, leveraging more general knowledge in low-difficulty domains and less in high-difficulty ones, consistently improves performance across both types of domains. Based on these observations, we propose an adaptive ensemble method that combines visual prompts and text adapters with pre-trained VLMs, weighted by transfer difficulty, to achieve strong performance on any target domain. In experiments on extensive benchmarks, our method consistently outperforms all baselines, particularly on unseen tasks, demonstrating its effectiveness.
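The abstract describes the adaptive ensemble only at a high level. Below is a minimal sketch of one way such a difficulty-weighted ensemble could be wired up; it is our own illustration, not the authors' implementation. In particular, the confidence-based difficulty proxy, the linear blending rule, and the function names (estimate_difficulty, adaptive_ensemble) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def estimate_difficulty(zero_shot_logits: torch.Tensor) -> float:
    # Crude proxy (an assumption, not the paper's metric): low average
    # zero-shot confidence of the pre-trained VLM suggests a harder
    # transfer to this domain. Returns a value in (0, 1).
    confidence = F.softmax(zero_shot_logits, dim=-1).max(dim=-1).values
    return float(1.0 - confidence.mean())

def adaptive_ensemble(zero_shot_logits: torch.Tensor,
                      adapted_logits: torch.Tensor) -> torch.Tensor:
    # Weight on general (pre-trained) knowledge shrinks as estimated
    # transfer difficulty grows, so hard domains rely more on the
    # task-adapted model.
    difficulty = estimate_difficulty(zero_shot_logits)
    alpha = 1.0 - difficulty
    return alpha * zero_shot_logits + (1.0 - alpha) * adapted_logits

# Toy usage: a batch of 4 images classified over 10 classes.
zs = torch.randn(4, 10)  # logits from the frozen pre-trained VLM
ad = torch.randn(4, 10)  # logits from the prompt/adapter-tuned VLM
preds = adaptive_ensemble(zs, ad).argmax(dim=-1)
print(preds)
```

The design choice worth noting is that the ensemble weight is computed per target domain rather than fixed, which is what makes the scheme difficulty-agnostic in spirit: easy domains automatically lean on pre-trained knowledge, hard ones on the adapted model.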
Anthology ID:
2024.emnlp-main.124
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2066–2085
URL:
https://aclanthology.org/2024.emnlp-main.124
Cite (ACL):
Yongjin Yang, Jongwoo Ko, and Se-Young Yun. 2024. Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2066–2085, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models (Yang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.124.pdf
Software:
2024.emnlp-main.124.software.zip