Propulsion: Steering LLM with Tiny Fine-Tuning

Md Kowsher, Nusrat Jahan Prottasha, Prakash Bhat


Abstract
The rapid advancement of Large Language Models (LLMs) has revolutionized natural language processing (NLP) and adjacent fields, yet fine-tuning these models for specific tasks remains computationally expensive and risks degrading pre-learned features. To address these challenges, we propose Propulsion, a novel parameter-efficient fine-tuning (PEFT) method designed to optimize task-specific performance while drastically reducing computational overhead. Inspired by the concept of controlled adjustments in physical motion, Propulsion selectively re-scales specific dimensions of a pre-trained model, steering output predictions toward task objectives without modifying the pre-trained parameters themselves. By introducing lightweight, trainable Propulsion parameters at each pre-trained layer, we minimize the number of parameters updated during fine-tuning, preventing overfitting and the overwriting of existing knowledge. Our theoretical analysis, supported by Neural Tangent Kernel (NTK) theory, shows that Propulsion approximates the performance of full fine-tuning with far fewer trainable parameters. Empirically, Propulsion reduces the trainable parameter count from 355.3 million to a mere 0.086 million, more than a 10x reduction over standard approaches such as LoRA, while maintaining competitive performance across benchmarks.
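As a rough illustration of the mechanism the abstract describes, here is a minimal PyTorch sketch of per-dimension re-scaling on a frozen pre-trained layer. The class name `PropulsionLinear`, the initialization of the scales to ones, and the placement of the scaling on the layer output are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class PropulsionLinear(nn.Module):
    """Hypothetical sketch: a frozen pre-trained linear layer whose output
    dimensions are re-scaled by a small trainable vector, which is the only
    set of parameters updated during fine-tuning."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # One trainable scale per output dimension; initializing to ones
        # makes the wrapped layer start out identical to the original.
        self.scale = nn.Parameter(torch.ones(base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Steer the pre-trained output along each dimension.
        return self.base(x) * self.scale

# Usage: only the scaling vector is trainable.
layer = PropulsionLinear(nn.Linear(1024, 1024))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
print(trainable, frozen)  # 1024 trainable scales vs. 1,049,600 frozen weights
```

Under this sketch, the trainable set grows with the number of wrapped layers times the hidden width rather than with the full weight matrices, which is consistent with the orders of magnitude reported in the abstract.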
Anthology ID: 2025.coling-main.506
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 7569–7597
URL: https://aclanthology.org/2025.coling-main.506/
Cite (ACL): Md Kowsher, Nusrat Jahan Prottasha, and Prakash Bhat. 2025. Propulsion: Steering LLM with Tiny Fine-Tuning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7569–7597, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Propulsion: Steering LLM with Tiny Fine-Tuning (Kowsher et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.506.pdf