An Exploration of Knowledge Editing for Arabic

Basel Mousi, Nadir Durrani, Fahim Dalvi


Abstract
While Knowledge Editing (KE) has been widely explored in English, its behavior in morphologically rich languages like Arabic remains underexamined. In this work, we present the first study of Arabic KE. We evaluate four methods (ROME, MEMIT, ICE, and LTE) on Arabic translations of the ZsRE and Counterfact benchmarks, analyzing both multilingual and cross-lingual settings. Our experiments on Llama-2-7B-chat show that parameter-based methods struggle with cross-lingual generalization, while instruction-tuned methods perform more robustly. We extend Learning-To-Edit (LTE) to a multilingual setting and show that joint Arabic-English training improves both editability and transfer. We release Arabic KE benchmarks and multilingual LTE training data to support future research.
Anthology ID:
2025.arabicnlp-main.34
Volume:
Proceedings of The Third Arabic Natural Language Processing Conference
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Kareem Darwish, Ahmed Ali, Ibrahim Abu Farha, Samia Touileb, Imed Zitouni, Ahmed Abdelali, Sharefah Al-Ghamdi, Sakhar Alkhereyf, Wajdi Zaghouani, Salam Khalifa, Badr AlKhamissi, Rawan Almatham, Injy Hamed, Zaid Alyafeai, Areeb Alowisheq, Go Inoue, Khalil Mrini, Waad Alshammari
Venue:
ArabicNLP
Publisher:
Association for Computational Linguistics
Note:
Pages:
417–424
URL:
https://aclanthology.org/2025.arabicnlp-main.34/
Cite (ACL):
Basel Mousi, Nadir Durrani, and Fahim Dalvi. 2025. An Exploration of Knowledge Editing for Arabic. In Proceedings of The Third Arabic Natural Language Processing Conference, pages 417–424, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
An Exploration of Knowledge Editing for Arabic (Mousi et al., ArabicNLP 2025)
PDF:
https://aclanthology.org/2025.arabicnlp-main.34.pdf