MicroEdit: Neuron-level Knowledge Disentanglement and Localization in Lifelong Model Editing

Shiqi Wang, Qi Wang, Runliang Niu, He Kong, Yi Chang


Abstract
Large language models (LLMs) require continual knowledge updates to keep pace with the evolving world. While various model editing methods have been proposed, most face critical challenges in the context of lifelong learning due to two fundamental limitations: (1) Edit Overshooting - parameter updates intended for a specific fact spill over to unrelated regions, interfering with previously retained knowledge; and (2) Knowledge Entanglement - polysemantic neurons encode multiple overlapping concepts, making it difficult to isolate and edit a single fact. In this paper, we propose MicroEdit, a neuron-level editing method that performs minimal and controlled interventions within LLMs. By leveraging a sparse autoencoder (SAE), MicroEdit disentangles knowledge representations and activates only a minimal set of necessary neurons for precise parameter updates. This targeted design enables fine-grained control over the editing scope, effectively mitigating interference and preserving unrelated knowledge. Extensive experiments show that MicroEdit outperforms prior methods and robustly handles lifelong knowledge editing across QA and Hallucination settings on LLaMA and Mistral.
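To make the SAE-based localization idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a hidden activation is encoded into an overcomplete sparse dictionary, and only the few most active latent neurons are retained as the editable scope, so a parameter update masked to those latents leaves unrelated features untouched. All weights, sizes, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64  # hidden size and (overcomplete) SAE dictionary size

# Illustrative random SAE weights; a real SAE is trained so that
# W_dec @ sae_encode(h) reconstructs h from a sparse code.
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)

def sae_encode(h):
    """ReLU sparse code for a hidden activation h (shape: d_sae)."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def select_neurons(h, k=4):
    """Indices of the k most active SAE latents for this input --
    a stand-in for the 'minimal set of necessary neurons'."""
    z = sae_encode(h)
    return np.argsort(z)[-k:][::-1]

h = rng.normal(size=d_model)            # a hidden-state activation for one fact
idx = select_neurons(h, k=4)            # latents the edit is allowed to touch
mask = np.zeros(d_sae)
mask[idx] = 1.0                         # updates outside this mask are zeroed
```

Restricting the update to `mask` is what bounds the editing scope: gradients (or closed-form deltas) applied in the sparse latent basis cannot reach latents outside the selected set, which is one plausible way to curb the edit-overshooting problem described above.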
Anthology ID:
2025.emnlp-main.1719
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
33870–33884
URL:
https://aclanthology.org/2025.emnlp-main.1719/
Cite (ACL):
Shiqi Wang, Qi Wang, Runliang Niu, He Kong, and Yi Chang. 2025. MicroEdit: Neuron-level Knowledge Disentanglement and Localization in Lifelong Model Editing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 33870–33884, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
MicroEdit: Neuron-level Knowledge Disentanglement and Localization in Lifelong Model Editing (Wang et al., EMNLP 2025)
PDF:
https://aclanthology.org/2025.emnlp-main.1719.pdf
Checklist:
 2025.emnlp-main.1719.checklist.pdf