Preserving Pre-trained Representation Space: On Effectiveness of Prefix-tuning for Large Multi-modal Models

Donghoon Kim, Gusang Lee, Kyuhong Shim, Byonghyo Shim


Abstract
Recently, Large Multi-modal Models (LMMs) have been revolutionizing the way machines interact with the world, unlocking new possibilities across various multi-modal applications. To adapt LMMs for downstream tasks, parameter-efficient fine-tuning (PEFT), which trains only additional prefix tokens or lightweight modules, has gained popularity. Nevertheless, there has been little analysis of how PEFT works in LMMs. In this paper, we delve into the strengths and weaknesses of each tuning strategy, shifting the focus away from the efficiency typically associated with these approaches. We first discover that model-parameter tuning methods, such as LoRA and Adapters, distort the feature representation space learned during pre-training, limiting the full utilization of pre-trained knowledge. We also demonstrate that prefix-tuning excels at preserving the representation space, despite its lower performance on downstream tasks. These findings suggest a simple two-step PEFT strategy, called Prefix-Tuned PEFT (PT-PEFT), which successively performs prefix-tuning and then another PEFT method (i.e., Adapter or LoRA), combining the benefits of both. Experimental results show that PT-PEFT not only improves performance on image captioning and visual question answering compared to vanilla PEFT methods but also helps preserve the representation space of the four pre-trained models.
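The two-step recipe described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, self-contained PyTorch illustration under our own assumptions (a toy transformer encoder stands in for the LMM, the training loops are placeholders, and all shapes and hyperparameters are illustrative); it is not the authors' implementation. Step 1 trains only the prefix tokens while the pre-trained backbone stays frozen, and Step 2 freezes the learned prefix and trains only low-rank (LoRA-style) updates.

```python
# Hypothetical sketch of the two-step PT-PEFT idea: prefix-tuning first, then LoRA.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA-style) update."""
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = linear
        self.base.weight.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

class PrefixTunedEncoder(nn.Module):
    """Toy 'pre-trained' encoder with learnable prefix tokens prepended to the input."""
    def __init__(self, d_model=64, n_prefix=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # stands in for the LMM backbone
        self.prefix = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)

    def forward(self, x):  # x: (batch, seq, d_model)
        p = self.prefix.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.encoder(torch.cat([p, x], dim=1))

def set_trainable(module, keywords):
    """Enable gradients only for parameters whose name contains one of `keywords`."""
    for name, param in module.named_parameters():
        param.requires_grad = any(k in name for k in keywords)

model = PrefixTunedEncoder()

# Step 1: prefix-tuning only -- the backbone (and hence its representation space) is frozen.
set_trainable(model, ["prefix"])
# ... run the downstream training loop here (placeholder) ...

# Step 2: freeze the learned prefix, swap the feed-forward projections for
# LoRA-augmented layers, and fine-tune only the low-rank updates.
for layer in model.encoder.layers:
    layer.linear1 = LoRALinear(layer.linear1)
    layer.linear2 = LoRALinear(layer.linear2)
set_trainable(model, ["A", "B"])  # only the LoRA parameters remain trainable
# ... run the downstream training loop again (placeholder) ...
```

In this sketch, the prefix learned in Step 1 anchors the model before any backbone weights are touched, which mirrors the abstract's claim that prefix-tuning preserves the pre-trained representation space while the subsequent LoRA step recovers downstream performance.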
Anthology ID:
2024.findings-emnlp.44
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
797–819
URL:
https://aclanthology.org/2024.findings-emnlp.44
Cite (ACL):
Donghoon Kim, Gusang Lee, Kyuhong Shim, and Byonghyo Shim. 2024. Preserving Pre-trained Representation Space: On Effectiveness of Prefix-tuning for Large Multi-modal Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 797–819, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Preserving Pre-trained Representation Space: On Effectiveness of Prefix-tuning for Large Multi-modal Models (Kim et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.44.pdf
Software:
2024.findings-emnlp.44.software.zip