The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse

Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng


Abstract
Although model editing has shown promise in revising knowledge in Large Language Models (LLMs), its impact on the inherent capabilities of LLMs is often overlooked. In this work, we reveal a critical phenomenon: even a single edit can trigger model collapse, manifesting as significant performance degradation across various benchmark tasks. However, benchmarking LLMs after each edit, while necessary to prevent such collapses, is impractically time-consuming and resource-intensive. To mitigate this, we propose using perplexity as a surrogate metric, validated by extensive experiments demonstrating that changes in an edited model’s perplexity are strongly correlated with its downstream task performance. We further conduct an in-depth study on sequential editing, a practical setting for real-world scenarios, across various editing methods and LLMs, focusing on hard cases from our preceding single-edit studies. The results indicate that nearly all examined editing methods result in model collapse after only a few edits. To facilitate further research, we have utilized GPT-3.5 to develop a new dataset, HardEdit, based on those hard cases. This dataset aims to establish the foundation for pioneering research in reliable model editing and in the mechanisms underlying editing-induced model collapse. We hope this work can draw the community’s attention to the potential risks inherent in model editing practices.
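The abstract proposes tracking perplexity as a cheap surrogate for full benchmark runs on an edited model. The snippet below is a minimal illustrative sketch of such a perplexity probe, not the authors' evaluation protocol; the model name ("gpt2") and probe text are placeholders, and in practice a held-out corpus would be used.

    # Minimal sketch: monitor an edited model with perplexity instead of
    # re-running a full benchmark suite after every edit.
    import math

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"  # placeholder for the edited LLM under inspection
    PROBE_TEXT = (
        "The quick brown fox jumps over the lazy dog. "
        "Paris is the capital of France."
    )  # placeholder; a held-out corpus such as WikiText would be more realistic

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    inputs = tokenizer(PROBE_TEXT, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean token-level
        # negative log-likelihood; its exponential is the perplexity.
        outputs = model(**inputs, labels=inputs["input_ids"])

    perplexity = math.exp(outputs.loss.item())
    print(f"Perplexity on probe text: {perplexity:.2f}")
    # A sharp jump in this value after an edit would flag a potential collapse
    # and signal that a full benchmark evaluation is warranted.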
Anthology ID:
2024.findings-acl.322
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5419–5437
URL:
https://aclanthology.org/2024.findings-acl.322
DOI:
10.18653/v1/2024.findings-acl.322
Cite (ACL):
Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, and Xueqi Cheng. 2024. The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse. In Findings of the Association for Computational Linguistics: ACL 2024, pages 5419–5437, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse (Yang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.322.pdf