Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune?

Max Ploner, Alan Akbik


Abstract
The ever-growing size of pretrained language models (PLMs) presents a significant challenge for efficiently fine-tuning and deploying these models for diverse sets of tasks within memory-constrained environments. In light of this, recent research has illuminated the possibility of selectively updating only a small subset of a model’s parameters during fine-tuning. Since no new parameters or modules are added, these methods retain the inference speed of the original model and incur no additional computational cost. However, it remains an open question which subset of parameters should be tuned to maximize task performance and generalizability. To investigate, this paper presents comprehensive experiments covering a large spectrum of subset selection strategies. We comparatively evaluate their impact on model performance as well as the resulting model’s capability to generalize to different tasks. Surprisingly, we find that the performance gains achieved by elaborate selection strategies are, at best, marginal compared to those obtained by tuning a random subset of parameters. Our experiments also indicate that selection-based tuning impairs generalizability to new tasks.
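To make the selection-based tuning idea concrete, the following is a minimal PyTorch sketch of a random-subset baseline of the kind the abstract describes: every parameter entry is frozen except a randomly drawn fraction, implemented by masking gradients. The model name, the density value, and the hook-based masking are illustrative assumptions, not the paper's exact implementation.

import torch
from transformers import AutoModelForSequenceClassification

# Illustrative model choice; any PLM with float parameters works the same way.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

density = 0.005  # fraction of parameter entries left trainable (illustrative value)

for param in model.parameters():
    # Bernoulli mask: 1.0 marks a tunable entry, 0.0 a frozen one.
    mask = (torch.rand_like(param) < density).float()
    # Zero the gradient of frozen entries on every backward pass,
    # so the optimizer only ever updates the random subset.
    param.register_hook(lambda grad, mask=mask: grad * mask)

# Plain SGD: with gradients masked to zero, frozen entries never move.
# (Weight decay would still shrink them, so it is deliberately omitted.)
optimizer = torch.optim.SGD(model.parameters(), lr=2e-5)

Because only gradients are masked, no new parameters or modules are introduced and inference cost is unchanged, matching the setting described in the abstract.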
Anthology ID: 2024.findings-eacl.122
Volume: Findings of the Association for Computational Linguistics: EACL 2024
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Yvette Graham, Matthew Purver
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1743–1759
URL: https://aclanthology.org/2024.findings-eacl.122
Cite (ACL): Max Ploner and Alan Akbik. 2024. Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune?. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1743–1759, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune? (Ploner & Akbik, Findings 2024)
PDF: https://aclanthology.org/2024.findings-eacl.122.pdf