Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment

Zhen Zhang, Jialu Wang, Xin Wang


Abstract
Pre-trained vision and language models such as CLIP have achieved remarkable success in connecting images and texts, with a primary focus on English. Despite recent efforts to extend CLIP to other languages, performance disparities across languages persist due to uneven resource availability. Moreover, existing cross-lingual transfer methods for such pre-trained models consume excessive resources when applied to a large number of languages. We therefore propose a new parameter-efficient cross-lingual transfer learning framework that uses a translation-based alignment method to mitigate multilingual disparities and explores parameter-efficient fine-tuning for cross-lingual transfer. Extensive experiments on the XTD and Multi30K datasets, covering 11 languages under zero-shot, few-shot, and full-dataset learning scenarios, show that our framework significantly reduces multilingual disparities and improves cross-lingual transfer results, especially in low-resource settings, while storing and fine-tuning only an extremely small number of parameters compared to the full model (e.g., 0.16% additional parameters per language in the few-shot learning scenario).
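The abstract does not spell out the implementation, but its two named ingredients, parameter-efficient fine-tuning and translation-based alignment, can be illustrated together. The sketch below is a minimal, hypothetical PyTorch example, not the authors' code: a LoRA-style low-rank adapter is attached to a frozen linear layer of a multilingual text encoder, and an in-batch contrastive loss pulls each translated caption's embedding toward the embedding of its English source. All names (LoRALinear, alignment_loss) and hyperparameter values (rank, alpha, temperature) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable low-rank update
    (a LoRA-style adapter; hypothetical, not the paper's exact module)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # Only these two small matrices are trained and stored per language.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank correction B(Ax).
        return self.base(x) + self.scale * F.linear(F.linear(x, self.lora_a), self.lora_b)

def alignment_loss(target_emb: torch.Tensor, english_emb: torch.Tensor,
                   temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss aligning each translated caption's embedding
    with the embedding of its English source (translation-based alignment)."""
    target_emb = F.normalize(target_emb, dim=-1)
    english_emb = F.normalize(english_emb, dim=-1)
    logits = target_emb @ english_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

Under this sketch, only lora_a and lora_b are optimized, so the per-language storage cost is roughly 2 * rank * d parameters per adapted layer, which makes a figure like 0.16% of the full model plausible.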
Anthology ID:
2023.findings-emnlp.483
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7258–7268
URL:
https://aclanthology.org/2023.findings-emnlp.483
DOI:
10.18653/v1/2023.findings-emnlp.483
Cite (ACL):
Zhen Zhang, Jialu Wang, and Xin Wang. 2023. Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7258–7268, Singapore. Association for Computational Linguistics.
Cite (Informal):
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment (Zhang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.483.pdf