Forget for Get: A Lightweight Two-phase Gradient Method for Knowledge Editing in Large Language Models

Yanhong Li, Min Yang, Xiping Hu, Chengming Li


Abstract
Recent studies have highlighted the remarkable knowledge retention capabilities of Large Language Models (LLMs) like GPT-4, while simultaneously revealing critical limitations in maintaining knowledge currency and accuracy. Existing knowledge editing methodologies, designed to update specific factual information without compromising general model performance, often encounter two fundamental challenges: parameter conflict during knowledge overwriting and excessive computational overhead. In this paper, we introduce ForGet (Forget for Get), a novel approach grounded in the principle of “forgetting before learning”. By pinpointing the location within the LLM that corresponds to the target knowledge, we first erase the outdated knowledge and then insert the new knowledge at this precise spot. ForGet is the first work to leverage a two-phase gradient-based process for knowledge editing, offering a lightweight solution that also delivers superior results. Experimental findings show that our method achieves more effective knowledge editing at a lower cost compared to previous techniques across various base models.
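The abstract describes the method only at a high level, so below is a minimal sketch of what a two-phase "forget, then get" gradient edit could look like. Everything beyond the two-phase idea is an assumption for illustration, not the paper's actual procedure: the stand-in model ("gpt2"), the choice of a single MLP layer as the edit location, the use of plain gradient ascent for the forgetting phase, and all step counts and learning rates are hypothetical placeholders.

```python
# A minimal sketch of a two-phase "forget, then get" gradient edit.
# Assumptions (not taken from the paper): the edit is localized to one
# MLP layer's weights, "forgetting" is gradient ascent on the outdated
# answer, and "getting" is gradient descent on the new answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates various base models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Freeze everything except the MLP of one layer presumed to store the fact.
edit_layer = 6  # hypothetical location; the paper pinpoints this per edit
for p in model.parameters():
    p.requires_grad_(False)
params = list(model.transformer.h[edit_layer].mlp.parameters())
for p in params:
    p.requires_grad_(True)

def answer_loss(prompt: str, answer: str) -> torch.Tensor:
    """Cross-entropy on the answer tokens only (prompt tokens are masked)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + answer, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # -100 = ignored by the loss
    return model(full_ids, labels=labels).loss

prompt = "The capital of X is"        # illustrative edit request
old_answer, new_answer = " A", " B"   # outdated fact vs. replacement fact

opt = torch.optim.SGD(params, lr=1e-3)

# Phase 1 (Forget): ascend the loss on the outdated answer to erase it.
for _ in range(5):
    opt.zero_grad()
    (-answer_loss(prompt, old_answer)).backward()  # negation => ascent
    opt.step()

# Phase 2 (Get): descend the loss on the new answer at the same location.
for _ in range(10):
    opt.zero_grad()
    answer_loss(prompt, new_answer).backward()
    opt.step()
```

The key design point the sketch tries to mirror is that both phases touch only the parameters at the pinpointed location, which is what keeps the edit lightweight and limits interference with the model's general behavior.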
Anthology ID:
2025.findings-emnlp.402
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7604–7623
URL:
https://aclanthology.org/2025.findings-emnlp.402/
Cite (ACL):
Yanhong Li, Min Yang, Xiping Hu, and Chengming Li. 2025. Forget for Get: A Lightweight Two-phase Gradient Method for Knowledge Editing in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7604–7623, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Forget for Get: A Lightweight Two-phase Gradient Method for Knowledge Editing in Large Language Models (Li et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.402.pdf
Checklist:
2025.findings-emnlp.402.checklist.pdf