LoRAN: Improved Low-Rank Adaptation by a Non-Linear Transformation

Yinqiao Li, Linqi Song, Hanxu Hou


Abstract
In this paper, we study parameter-efficient fine-tuning methods for large pre-trained models. Specifically, we improve LoRA to alleviate the performance loss caused by the constrained low-rank adapter by introducing a non-linear transformation, which we call LoRAN. For better adaptation, we also design a new non-linear function to appropriately fit the accumulated weight updates. We evaluate our method on multiple advanced large language models. Experimental results show that LoRAN significantly outperforms a strong baseline on the SAMSum and 20 Newsgroups tasks. Moreover, when a lower rank is used, our approach even yields a 1.95-point improvement on the classification task.
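
The sketch below illustrates the general idea described in the abstract: a LoRA-style linear layer whose low-rank update is passed through a non-linear transformation. It is a minimal illustration only, not the authors' implementation; the abstract does not specify the paper's non-linear function, so torch.tanh is used as a placeholder, the placement of the non-linearity between the down- and up-projections is an assumption, and the class name, rank, and scaling are illustrative.

```python
# Minimal sketch of a LoRA-style adapter with a non-linear transformation
# applied on the low-rank path. Placeholder assumptions: tanh as the
# non-linear function, non-linearity placed between the down- and
# up-projections, rank=8, alpha=16. The actual LoRAN function and
# placement may differ.
import torch
import torch.nn as nn


class NonLinearLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        # Frozen pre-trained weight (standard LoRA setup).
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False
        )
        nn.init.normal_(self.weight, std=0.02)
        # Trainable low-rank factors: A projects down, B projects up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Base projection through the frozen pre-trained weight.
        base = x @ self.weight.T
        # Low-rank path: project down, apply the (placeholder) non-linear
        # transformation, then project up and scale.
        low_rank = torch.tanh(x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * low_rank


if __name__ == "__main__":
    layer = NonLinearLoRALinear(in_features=768, out_features=768, rank=8)
    out = layer(torch.randn(2, 10, 768))
    print(out.shape)  # torch.Size([2, 10, 768])
```

As in standard LoRA, only the low-rank factors are trained while the pre-trained weight stays frozen; the non-linearity is what distinguishes this sketch from the plain linear update ΔW = BA.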
Anthology ID:
2024.findings-emnlp.177
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3134–3143
URL:
https://aclanthology.org/2024.findings-emnlp.177
Cite (ACL):
Yinqiao Li, Linqi Song, and Hanxu Hou. 2024. LoRAN: Improved Low-Rank Adaptation by a Non-Linear Transformation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3134–3143, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
LoRAN: Improved Low-Rank Adaptation by a Non-Linear Transformation (Li et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.177.pdf
Software:
2024.findings-emnlp.177.software.zip