PAELLA: Parameter-Efficient Lightweight Language-Agnostic Captioning Model

Rita Ramos, Emanuele Bugliarello, Bruno Martins, Desmond Elliott


Abstract
We introduce PAELLA, a Parameter-Efficient Lightweight Language-Agnostic image captioning model designed to be both parameter- and data-efficient through retrieval augmentation. The model is trained by learning a small mapping network with 34M parameters between a pre-trained visual model and a multilingual language model that is conditioned on two types of input: (i) the image itself, and (ii) a set of retrieved captions in the target language. The retrieved examples play a key role in guiding the model to generate captions across languages. Through retrieval, the model can be lightweight both in the number of trainable parameters, which exist only in its mapping network, and in the amount of multilingual training data required. Experiments on the XM3600 dataset, featuring 36 languages, show that PAELLA can outperform or compete against some models with 3–77× more learned parameters and 35–863× more data, particularly in low-resource languages. We also find that PAELLA can be trained on only monolingual data and still show strong zero-shot abilities in other languages.
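As a rough sketch of the setup the abstract describes, the snippet below shows the only trainable component: a small mapping network that turns frozen image features into prefix embeddings for a frozen multilingual language model, alongside a text prompt built from retrieved captions in the target language. The class name, prefix length, dimensions, and prompt template are illustrative assumptions and do not reproduce the authors' released code.

```python
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """Small trainable bridge between a frozen image encoder and a frozen
    multilingual decoder. The paper's network has ~34M parameters; this
    single projection is only a placeholder of the same input/output shape."""

    def __init__(self, image_dim: int = 768, lm_dim: int = 1024, prefix_len: int = 10):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        # Map one pooled image feature vector to `prefix_len` soft-prompt vectors.
        self.proj = nn.Linear(image_dim, lm_dim * prefix_len)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # (batch, image_dim) -> (batch, prefix_len, lm_dim)
        return self.proj(image_features).view(-1, self.prefix_len, self.lm_dim)


def build_prompt(retrieved_captions: list, target_language: str) -> str:
    # Retrieved captions in the target language are concatenated into the text
    # prompt that conditions the decoder; this template is an illustrative guess.
    context = " ".join(retrieved_captions)
    return f"Similar captions: {context} Caption in {target_language}:"


if __name__ == "__main__":
    mapper = MappingNetwork()
    fake_image_features = torch.randn(2, 768)   # e.g. pooled CLIP-style features
    prefix = mapper(fake_image_features)        # (2, 10, 1024) soft prompt
    prompt = build_prompt(["a dog runs on the beach",
                           "a puppy playing in the sand"], "German")
    print(prefix.shape, prompt)
```

In this reading, the frozen language model would receive the soft prefix concatenated with the embedded prompt and be trained (through the mapping network only) to generate the target-language caption.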
Anthology ID: 2024.findings-naacl.225
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3549–3564
URL: https://aclanthology.org/2024.findings-naacl.225
Cite (ACL):
Rita Ramos, Emanuele Bugliarello, Bruno Martins, and Desmond Elliott. 2024. PAELLA: Parameter-Efficient Lightweight Language-Agnostic Captioning Model. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3549–3564, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
PAELLA: Parameter-Efficient Lightweight Language-Agnostic Captioning Model (Ramos et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.225.pdf
Copyright: 2024.findings-naacl.225.copyright.pdf