DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning

Taku Hasegawa, Kyosuke Nishida, Koki Maeda, Kuniko Saito


Abstract
This paper presents DueT, a novel transfer learning method for vision-and-language models built via contrastive learning. In DueT, adapters are inserted into the image and text encoders, which are initialized from models pre-trained on uni-modal corpora and then frozen. By training only these adapters, DueT enables efficient learning with a small number of trainable parameters. Moreover, unlike conventional adapters, those in DueT are equipped with a gating mechanism, enabling effective transfer and connection of the knowledge acquired by the pre-trained uni-modal encoders while preventing catastrophic forgetting. We report that DueT outperformed simple fine-tuning, a conventional method that freezes the image encoder and trains only the text encoder, and a LoRA-based adapter method in accuracy and parameter efficiency for zero-shot image and text retrieval in both English and Japanese domains.
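The abstract does not specify the adapter's exact form. As a rough illustration only, a gated bottleneck adapter inserted into a frozen encoder could look like the PyTorch sketch below; the class name GatedAdapter, the bottleneck width, the gate initialization, and the sigmoid-gated residual are all assumptions for illustration, not the paper's verified architecture.

import torch
import torch.nn as nn

class GatedAdapter(nn.Module):
    """Bottleneck adapter with a learned gate (illustrative sketch only).
    The gate blends the frozen encoder's hidden state with the adapter's
    transformed state, one plausible way to inject cross-modal knowledge
    while preserving the pre-trained uni-modal representation."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection
        # Negative init makes sigmoid(gate) ~ 0.02, so the adapter starts
        # nearly inactive; this initialization is an assumption, not the
        # paper's documented choice.
        self.gate = nn.Parameter(torch.tensor(-4.0))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        delta = self.up(self.act(self.down(hidden)))
        g = torch.sigmoid(self.gate)
        return hidden + g * delta  # gated residual around the frozen layer

def freeze_except_adapters(encoder: nn.Module) -> None:
    # Freeze every pre-trained encoder parameter, then re-enable gradients
    # for the adapter modules, so only adapters (and gates) are trained.
    for p in encoder.parameters():
        p.requires_grad = False
    for m in encoder.modules():
        if isinstance(m, GatedAdapter):
            for p in m.parameters():
                p.requires_grad = True

In this sketch only the adapter and gate parameters receive gradients, matching the abstract's claim of parameter-efficient training; the actual DueT gating may differ in form and placement.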
Anthology ID:
2023.emnlp-main.839
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13607–13624
URL:
https://aclanthology.org/2023.emnlp-main.839
DOI:
10.18653/v1/2023.emnlp-main.839
Cite (ACL):
Taku Hasegawa, Kyosuke Nishida, Koki Maeda, and Kuniko Saito. 2023. DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13607–13624, Singapore. Association for Computational Linguistics.
Cite (Informal):
DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning (Hasegawa et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.839.pdf
Video:
https://aclanthology.org/2023.emnlp-main.839.mp4