Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace
Jia-Chen Zhang | Yu-Jie Xiong | Chun-Ming Xia | Dong-Hai Zhu | Xi-He Qiu
Proceedings of the 31st International Conference on Computational Linguistics (2025)
This paper proposes a novel parameter-efficient fine-tuning method that combines the knowledge-completion capability of deconvolution with subspace learning, reducing the number of parameters required for fine-tuning by a factor of 8. Experimental results demonstrate that our method achieves superior training efficiency and performance compared to existing models.
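As a rough illustration of the general idea, the sketch below shows a hypothetical LoRA-style adapter whose low-rank subspace factors are reconstructed from smaller seed matrices via transposed convolution (deconvolution), so only the seeds and a small kernel are trained. All names, shapes, and the shared-kernel choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeconvSubspaceAdapter(nn.Module):
    """Hypothetical sketch (not the authors' code): the low-rank
    factors of a LoRA-style update are reconstructed from smaller
    seed matrices via transposed convolution (deconvolution)."""

    def __init__(self, in_features, out_features, rank=8, kernel=2):
        super().__init__()
        assert rank % kernel == 0 and in_features % kernel == 0 and out_features % kernel == 0
        # Seeds are `kernel`x smaller per side than the factors they
        # expand into, cutting trainable parameters by roughly kernel**2.
        self.seed_a = nn.Parameter(0.01 * torch.randn(1, 1, rank // kernel, in_features // kernel))
        self.seed_b = nn.Parameter(torch.zeros(1, 1, out_features // kernel, rank // kernel))
        # A single shared deconvolution upsamples each seed to full size.
        self.deconv = nn.ConvTranspose2d(1, 1, kernel_size=kernel, stride=kernel, bias=False)

    def forward(self, x, base_out):
        # Reconstruct subspace factors A (rank x d_in) and B (d_out x rank).
        A = self.deconv(self.seed_a).squeeze(0).squeeze(0)
        B = self.deconv(self.seed_b).squeeze(0).squeeze(0)
        # Additive low-rank update in the learned subspace: h = W0 x + B A x.
        return base_out + x @ A.t() @ B.t()

# Usage: wrap a frozen base linear layer with the adapter.
base = nn.Linear(768, 768)
adapter = DeconvSubspaceAdapter(768, 768, rank=8, kernel=2)
x = torch.randn(4, 768)
h = adapter(x, base(x))  # (4, 768)
```

Because seed_b is initialized to zero, the update starts at zero and the adapted layer initially reproduces the frozen base layer, mirroring the usual LoRA initialization convention.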