Let’s Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model

Haoyun Xu, Runzhe Zhan, Yingpeng Ma, Derek F. Wong, Lidia S. Chao


Abstract
Large Language Models (LLMs) are composed of neurons that exhibit various behaviors and roles, which become increasingly diversified as models scale. Recent studies have revealed that not all neurons are active across different datasets, and this sparsity correlates positively with task-specific ability, leading to advancements in model pruning and training efficiency. Traditional fine-tuning methods engage all parameters of LLMs, which is computationally expensive and may not be necessary. In contrast, Parameter-Efficient Fine-Tuning (PEFT) approaches aim to minimize the number of trainable parameters, yet they still operate at a relatively macro scale (e.g., layer-level). We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach that refines the granularity of parameter training down to the individual neuron, enabling more parameter-efficient fine-tuning. Experimental results show that NeFT not only exceeds the performance of full-parameter fine-tuning and PEFT but also provides insights into the analysis of neurons. Our code and data are available at: https://github.com/NLP2CT/NeFT.
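The abstract describes restricting training to individual neurons rather than whole layers or the full parameter set. As a rough, hypothetical sketch (not the authors' NeFT implementation, and not their neuron-selection criterion), one way to realize neuron-level updates in PyTorch is to mask the gradients of all non-selected output neurons of a linear layer before the optimizer step:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of neuron-level fine-tuning via gradient masking.
# Layer sizes, neuron indices, and the loss below are illustrative only;
# NeFT's actual neuron-selection procedure is described in the paper.

layer = nn.Linear(in_features=1024, out_features=4096)

# Suppose an analysis step has flagged these output neurons as task-relevant.
selected_neurons = torch.tensor([3, 17, 256, 1024, 2048])

mask = torch.zeros(layer.out_features, 1)
mask[selected_neurons] = 1.0

# Zero the gradients of every non-selected neuron (row of the weight matrix).
layer.weight.register_hook(lambda grad: grad * mask)
layer.bias.register_hook(lambda grad: grad * mask.squeeze(1))

# weight_decay=0.0 so non-selected rows are not shrunk by decoupled decay.
optimizer = torch.optim.AdamW(layer.parameters(), lr=1e-4, weight_decay=0.0)

x = torch.randn(8, 1024)
loss = layer(x).pow(2).mean()  # dummy objective for demonstration
loss.backward()
optimizer.step()               # only the selected neurons change
```

With zero gradients and no weight decay, the Adam-style update leaves unselected rows untouched, so only the chosen neurons' incoming weights and biases are modified.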
Anthology ID:
2025.coling-main.630
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
9393–9406
URL:
https://aclanthology.org/2025.coling-main.630/
Cite (ACL):
Haoyun Xu, Runzhe Zhan, Yingpeng Ma, Derek F. Wong, and Lidia S. Chao. 2025. Let’s Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9393–9406, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Let’s Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model (Xu et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.630.pdf