COMEM: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models

Shanbao Qiao, Xuebing Liu, Seung-Hoon Na


Abstract
Noting that world knowledge continuously evolves over time, large language models (LLMs) need to be properly adjusted by performing “knowledge editing”, which involves updating outdated information or correcting false information. To achieve reliable and “massive” editing capabilities in terms of generalization and specificity, this paper proposes a unified knowledge editing method called in-COntext retrieval-augmented Mass-Editing Memory (COMEM), which combines two types of editing approaches: parameter updating and in-context knowledge editing (IKE). In particular, COMEM incorporates retrieval-augmented IKE, a novel extension of IKE designed for massive editing tasks, based on an updating-aware demonstration construction. Experimental results on the zsRE and CounterFact datasets demonstrate that COMEM outperforms all existing methods, achieving state-of-the-art performance. Our code is available at https://github.com/JoveReCode/COMEM.git.
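To illustrate the retrieval-augmented in-context editing pattern the abstract describes, here is a minimal, hypothetical Python sketch: a memory of edited facts is searched for the entries most relevant to a query, and the retrieved edits are prepended as demonstrations before the query is sent to the LLM. All names (Edit, EditMemory, build_prompt) and the toy bag-of-words retriever are illustrative assumptions, not COMEM's actual implementation or its updating-aware demonstration construction.

# Hypothetical sketch (not COMEM's code): retrieval-augmented
# in-context knowledge editing. Edited facts live in an external
# memory; the edits most relevant to a query are retrieved and
# prepended as in-context demonstrations.
from collections import Counter
from dataclasses import dataclass
import math

@dataclass
class Edit:
    subject: str     # edited entity, e.g. "France"
    prompt: str      # query template, e.g. "The capital of France is"
    new_object: str  # updated answer, e.g. "Lyon"

def bow(text):
    # Toy bag-of-words vector; a real system would use a dense retriever.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class EditMemory:
    def __init__(self, edits):
        self.edits = edits
        self.vecs = [bow(e.prompt + " " + e.subject) for e in edits]

    def retrieve(self, query, k=3):
        # Return the k stored edits most similar to the query,
        # dropping edits with no lexical overlap at all.
        qv = bow(query)
        scored = sorted(zip(self.edits, self.vecs),
                        key=lambda ev: cosine(qv, ev[1]), reverse=True)
        return [e for e, v in scored[:k] if cosine(qv, v) > 0]

def build_prompt(memory, query, k=3):
    # Prepend retrieved edits as "new fact" demonstrations.
    demos = [f"New fact: {e.prompt} {e.new_object}."
             for e in memory.retrieve(query, k)]
    return "\n".join(demos + [query])

memory = EditMemory([
    Edit("France", "The capital of France is", "Lyon"),
    Edit("Einstein", "Albert Einstein was born in", "Munich"),
])
print(build_prompt(memory, "The capital of France is"))
# -> New fact: The capital of France is Lyon.
#    The capital of France is

In the paper's unified setting, such retrieved demonstrations would be combined with parameter-updating edits; this sketch covers only the retrieval-augmented in-context half of that combination.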
Anthology ID:
2024.findings-naacl.151
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2333–2347
URL:
https://aclanthology.org/2024.findings-naacl.151
Cite (ACL):
Shanbao Qiao, Xuebing Liu, and Seung-Hoon Na. 2024. COMEM: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2333–2347, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
COMEM: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models (Qiao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.151.pdf
Copyright:
2024.findings-naacl.151.copyright.pdf