Sparsity May Be All You Need: Sparse Random Parameter Adaptation

Jesus Rios, Pierre Dognin, Ronny Luss, Karthikeyan Natesan Ramamurthy


Abstract
Full fine-tuning of large language models for alignment and task adaptation has become prohibitively expensive as models have grown in size. Parameter-Efficient Fine-Tuning (PEFT) methods aim to significantly reduce the computational and memory resources needed to fine-tune these models by training only a small number of parameters instead of all of them. Currently, the most popular PEFT method is Low-Rank Adaptation (LoRA), which freezes the model's parameters and introduces a small set of trainable parameters in the form of low-rank matrices. We propose simply reducing the number of trainable parameters by randomly selecting a small proportion of the model's parameters to train, while keeping all other parameters fixed, without any additional prior assumptions such as low-rank structure. In this paper, we compare the efficiency and performance of our approach with other PEFT methods as well as full-parameter fine-tuning. We find our method to be competitive with LoRA when using a similar number of trainable parameters. Our findings suggest that what truly matters for a PEFT technique to perform well is not necessarily the specific adapter structure, but rather the number of trainable parameters used.
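The core idea described in the abstract admits a very short implementation. The sketch below is a minimal, illustrative PyTorch example (not the authors' released code): a fixed random binary mask is drawn for each parameter tensor, and a gradient hook zeroes updates outside the mask so that only the randomly selected entries are trained. The helper name `attach_random_masks`, the `density` argument, and the usage comments are assumptions made for illustration.

```python
# Minimal sketch of sparse random parameter adaptation (assumed implementation,
# not the paper's code): train only a random fraction `density` of each weight
# tensor by zeroing the gradients of all unselected entries.
import torch

def attach_random_masks(model, density=0.001, seed=0):
    """Sample a fixed random binary mask per parameter and register a gradient
    hook that keeps gradients only at the selected positions."""
    gen = torch.Generator(device="cpu").manual_seed(seed)
    for p in model.parameters():
        mask = (torch.rand(p.shape, generator=gen) < density).to(p.device, p.dtype)
        # The hook multiplies incoming gradients by the fixed mask, so the
        # optimizer only ever updates the randomly chosen entries.
        p.register_hook(lambda grad, m=mask: grad * m)

# Usage (hypothetical): attach masks, then fine-tune as usual.
# attach_random_masks(model, density=0.001)
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)
# (weight_decay=0.0 keeps the unselected, nominally frozen entries unchanged.)
```

Note that this sketch only illustrates the training dynamics; realizing the memory savings discussed in the paper would require storing and optimizing only the selected entries rather than full-size masks and optimizer states.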
Anthology ID:
2025.findings-emnlp.1013
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18650–18666
URL:
https://aclanthology.org/2025.findings-emnlp.1013/
Cite (ACL):
Jesus Rios, Pierre Dognin, Ronny Luss, and Karthikeyan Natesan Ramamurthy. 2025. Sparsity May Be All You Need: Sparse Random Parameter Adaptation. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18650–18666, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Sparsity May Be All You Need: Sparse Random Parameter Adaptation (Rios et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1013.pdf
Checklist:
2025.findings-emnlp.1013.checklist.pdf