LoRAN: Improved Low-Rank Adaptation by a Non-Linear Transformation
Yinqiao Li | Linqi Song | Hanxu Hou
Findings of the Association for Computational Linguistics: EMNLP 2024
In this paper, we study parameter-efficient fine-tuning methods for large pre-trained models. Specifically, we improve the LoRA approach to alleviate the performance loss caused by the constrained adapter by introducing a non-linear transformation (we call it LoRAN). For better adaptation, we also design a new non-linear function to appropriately fit the accumulated weight updates. We test our method on multiple advanced large language models. Experimental results show that our LoRAN significantly outperforms a strong baseline on the SAMSum and 20 Newsgroups tasks. Moreover, when a lower rank is applied, our approach yields a 1.95-point improvement on the classification task.
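For intuition, the sketch below shows how a non-linear transformation can be inserted into a LoRA-style adapter. The class name, the choice of `tanh` as the non-linearity, and its placement between the two low-rank factors are assumptions for illustration; the abstract does not specify the paper's actual function or where it is applied.

```python
import torch
import torch.nn as nn

class LoRANLinear(nn.Module):
    """Hypothetical sketch of a LoRA-style adapter with a non-linear
    transformation in the low-rank update path (LoRAN-like idea).
    The specific non-linearity (tanh here) is an assumed placeholder."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        # Freeze the pre-trained weights; only the adapter is trained.
        self.base.weight.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        # Low-rank factors, as in standard LoRA.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus a non-linearly transformed low-rank update.
        # torch.tanh stands in for the paper's proposed non-linear function.
        delta = torch.tanh(x @ self.A.t()) @ self.B.t()
        return self.base(x) + self.scaling * delta

# Example usage: wrap a pre-trained projection layer with the adapter.
layer = LoRANLinear(nn.Linear(768, 768), rank=4)
out = layer(torch.randn(2, 16, 768))
```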