Md Kowsher


2025

Propulsion: Steering LLM with Tiny Fine-Tuning
Md Kowsher | Nusrat Jahan Prottasha | Prakash Bhat
Proceedings of the 31st International Conference on Computational Linguistics

The rapid advancements in Large Language Models (LLMs) have revolutionized natural language processing (NLP) and adjacent fields, yet fine-tuning these models for specific tasks remains computationally expensive and risks degrading pre-learned features. To address these challenges, we propose Propulsion, a novel parameter-efficient fine-tuning (PEFT) method designed to optimize task-specific performance while drastically reducing computational overhead. Inspired by the concept of controlled adjustments in physical motion, Propulsion selectively re-scales specific dimensions of a pre-trained model, guiding output predictions toward task objectives without modifying the model’s parameters. By introducing lightweight, trainable Propulsion parameters at the pre-trained layer, we minimize the number of parameters updated during fine-tuning, thus preventing the overfitting or overwriting of existing knowledge. Our theoretical analysis, supported by Neural Tangent Kernel (NTK) theory, shows that Propulsion approximates the performance of full fine-tuning with far fewer trainable parameters. Empirically, Propulsion reduces the parameter count from 355.3 million to a mere 0.086 million—achieving over a 10x reduction compared to standard approaches like LoRA—while maintaining competitive performance across benchmarks.
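The re-scaling mechanism the abstract describes can be pictured with a short PyTorch sketch. This is an illustrative approximation, not the authors' implementation: it assumes Propulsion amounts to multiplying each output dimension of a frozen pre-trained layer by a trainable scalar, and the class name PropulsionLinear, the layer sizes, and the plain multiplicative form are hypothetical (the paper's exact formulation, e.g. any polynomial scaling of the Propulsion parameters, may differ).

import torch
import torch.nn as nn

class PropulsionLinear(nn.Module):
    """Hypothetical sketch of a Propulsion-style layer: the pre-trained
    weights stay frozen and only a per-dimension scale vector (the
    'Propulsion parameters') is trained."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False  # keep pre-learned features intact
        # One trainable scalar per output dimension, initialized to 1
        # so training starts from the original model's behavior.
        self.scale = nn.Parameter(torch.ones(pretrained.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-scale each output dimension; the base weights are untouched.
        return self.base(x) * self.scale

# Usage: wrap a frozen pre-trained projection and train only `scale`.
layer = PropulsionLinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 768 trainable vs. 768*768 + 768 frozen parameters

Initializing the scale vector to ones means training begins exactly at the pre-trained model's output, consistent with the abstract's goal of steering predictions toward a task without overwriting existing knowledge, and the trainable-parameter count grows only with the layer width rather than with the full weight matrix.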

2023

Contrastive Learning for Universal Zero-Shot NLI with Cross-Lingual Sentence Embeddings
Md Kowsher | Md. Shohanur Islam Sobuj | Nusrat Jahan Prottasha | Mohammad Shamsul Arefin | Yasuhiko Morimoto
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)