Can We Edit Multimodal Large Language Models?

Siyuan Cheng, Bozhong Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, Ningyu Zhang


Abstract
In this paper, we focus on editing multimodal Large Language Models (LLMs). Compared to editing single-modal LLMs, editing multimodal models is more challenging and demands greater scrutiny and care during the editing process. To facilitate research in this area, we construct a new benchmark, dubbed MMEdit, for editing multimodal LLMs, and establish a suite of innovative metrics for evaluation. We conduct comprehensive experiments with various model editing baselines and analyze the impact of editing different components of multimodal LLMs. Empirically, we find that existing baselines can edit multimodal LLMs to some extent, but the results remain far from satisfactory, indicating the difficulty of this task. We hope our work provides the NLP community with useful insights.
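The abstract refers to a suite of evaluation metrics for multimodal editing. As a rough illustration only: the model-editing literature commonly scores an edit on reliability, generality, and locality, and a minimal sketch of such metrics might look as follows (all function and field names here are hypothetical, not the paper's actual MMEdit code):

    # Hypothetical sketch of standard model-editing metrics
    # (reliability, generality, locality); not the paper's MMEdit API.
    from typing import Callable, Dict, List

    def exact_match_rate(predict: Callable[[dict], str],
                         cases: List[Dict]) -> float:
        """Fraction of cases where the model's answer equals the target."""
        if not cases:
            return 0.0
        hits = sum(predict(c["input"]) == c["target"] for c in cases)
        return hits / len(cases)

    def evaluate_edit(predict_post: Callable[[dict], str],
                      predict_pre: Callable[[dict], str],
                      edit_cases: List[Dict],
                      rephrase_cases: List[Dict],
                      unrelated_cases: List[Dict]) -> Dict[str, float]:
        # Reliability: the edited model answers the edited prompts correctly.
        reliability = exact_match_rate(predict_post, edit_cases)
        # Generality: the edit transfers to rephrased (text or image) prompts.
        generality = exact_match_rate(predict_post, rephrase_cases)
        # Locality: unrelated inputs keep their pre-edit predictions.
        if unrelated_cases:
            kept = sum(predict_post(c["input"]) == predict_pre(c["input"])
                       for c in unrelated_cases)
            locality = kept / len(unrelated_cases)
        else:
            locality = 0.0
        return {"reliability": reliability,
                "generality": generality,
                "locality": locality}

For multimodal editing, the "input" dicts would carry both an image and a text prompt, and rephrase_cases could include image rephrasings as well as textual ones; consult the paper for the benchmark's actual definitions.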
Anthology ID:
2023.emnlp-main.856
Original:
2023.emnlp-main.856v1
Version 2:
2023.emnlp-main.856v2
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13877–13888
URL:
https://aclanthology.org/2023.emnlp-main.856
DOI:
10.18653/v1/2023.emnlp-main.856
Cite (ACL):
Siyuan Cheng, Bozhong Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, and Ningyu Zhang. 2023. Can We Edit Multimodal Large Language Models?. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13877–13888, Singapore. Association for Computational Linguistics.
Cite (Informal):
Can We Edit Multimodal Large Language Models? (Cheng et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.856.pdf
Video:
https://aclanthology.org/2023.emnlp-main.856.mp4