Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing

Qi Li, Xiaowen Chu


Abstract
Model editing has become a promising method for precisely and effectively updating knowledge in language models. In this paper, we investigate knowledge attenuation, in which the retention of updated knowledge in a language model decreases as the number of edits grows during sequential editing. Through an empirical study, we find that existing editing methods generally suffer from knowledge attenuation. We attribute this phenomenon to two factors: (1) redundant parameter interference and (2) update weight disentanglement. To address these issues, we propose AdaPLE, which not only mitigates knowledge attenuation but also improves performance on existing benchmarks. To the best of our knowledge, we are the first to investigate the cause and mitigation of knowledge attenuation in sequential LLM editing.
Anthology ID:
2024.findings-acl.323
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5438–5455
URL:
https://aclanthology.org/2024.findings-acl.323
Cite (ACL):
Qi Li and Xiaowen Chu. 2024. Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing. In Findings of the Association for Computational Linguistics ACL 2024, pages 5438–5455, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing (Li & Chu, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.323.pdf