On the Robustness of Editing Large Language Models
Xinbei Ma | Tianjie Ju | Jiyang Qiu | Zhuosheng Zhang | Hai Zhao | Lifeng Liu | Yulong Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have played a pivotal role in building communicative AI, yet they face the challenge of efficient updates. Model editing enables the manipulation of specific knowledge memories and the behavior of language generation without retraining. However, the robustness of model editing remains an open question. This work seeks to understand the strengths and limitations of editing methods, thus facilitating practical applications of communicative AI. We focus on three key research questions. RQ1: Can edited LLMs behave consistently, resembling communicative AI in realistic situations? RQ2: To what extent does the rephrasing of prompts lead LLMs to deviate from the edited knowledge memory? RQ3: Which knowledge features are correlated with the performance and robustness of editing? Our empirical studies uncover a substantial disparity between existing editing methods and the practical application of LLMs. On rephrased prompts that are flexible but common in realistic applications, editing performance declines significantly. Further analysis shows that more popular knowledge is memorized better, is easier to recall, and is more challenging to edit effectively.