Mohammad Ismail Amro
2026
Parameter-Efficient Adaptation of Self-Supervised Models for Arabic Speech Recognition
Wafa Mohammed Alshehri | Wasfi G. Al-khatib | Mohammad Ismail Amro
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
Arabic speech recognition systems face distinct challenges due to the language's complex morphology and dialectal variations. Self-supervised learning (SSL) models such as XLS-R have shown promising results, but their size, with over 300 million parameters, makes fine-tuning computationally expensive. In this work, we present the first comparative study of parameter-efficient fine-tuning (PEFT) methods, specifically LoRA and DoRA, applied to XLS-R for Arabic ASR. We evaluate on the newly released Common Voice Arabic V24.0 dataset, establishing new benchmarks. Our full fine-tuning achieves state-of-the-art results among XLS-R-based models with a 23.03% word error rate (WER). In our experiments, LoRA achieved a 36.10% WER while training just 2% of the model's parameters, and DoRA reached 45.20% WER in initial experiments. We analyze the trade-offs between accuracy and efficiency, offering practical guidance for developing Arabic ASR systems when computational resources are limited. The models and code are publicly available.
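To make the setup concrete, the sketch below shows one way LoRA can be attached to an XLS-R checkpoint for CTC-based ASR using the Hugging Face `transformers` and `peft` libraries. This is a minimal illustration, not the authors' exact configuration: the checkpoint name is the public 300M-parameter XLS-R release, and the vocabulary size, rank, scaling, and target modules are assumed values for illustration.

```python
# Minimal sketch: LoRA on XLS-R for CTC-based ASR (assumed hyperparameters,
# not the paper's reported configuration).
from transformers import Wav2Vec2ForCTC
from peft import LoraConfig, get_peft_model

# facebook/wav2vec2-xls-r-300m is the public 300M-parameter XLS-R checkpoint.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    vocab_size=64,              # placeholder: size of the Arabic character vocabulary
    ctc_loss_reduction="mean",
)

# LoRA injects trainable low-rank updates into the chosen projection layers
# while the pretrained weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices (assumed)
    lora_alpha=32,                        # scaling factor for the update (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections in Wav2Vec2/XLS-R blocks
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a few percent of all parameters
```

With a setup along these lines, only the LoRA matrices (and the CTC head) are updated during training, which is how the trainable-parameter count can drop to roughly 2% of the full model, as reported in the abstract.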