Maastricht University at AMIYA: Adapting LLMs for Dialectal Arabic using Fine-tuning and MBR Decoding

Abdulhai Alali, Abderrahmane Issam


Abstract
Large Language Models (LLMs) are becoming increasingly multilingual, supporting hundreds of languages, especially high-resource ones. Unfortunately, dialectal varieties remain underrepresented due to limited data and high linguistic variation. In this work, we adapt a pre-trained LLM to improve dialectal performance. Specifically, we use Low-Rank Adaptation (LoRA) fine-tuning on monolingual and English–Dialect parallel data, adapter merging, and dialect-aware MBR decoding to improve dialectal fidelity in generation and translation. Experiments on Syrian, Moroccan, and Saudi Arabic show that merging and MBR decoding improve dialectal fidelity while preserving semantic accuracy. This combination provides a compact and effective framework for robust dialectal Arabic generation.
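The decoding step described in the abstract can be illustrated concretely. Below is a minimal sketch of candidate-based MBR decoding, assuming the common recipe of sampling N outputs from the fine-tuned model and returning the candidate with the highest average utility against the other samples; the chrF utility (via sacrebleu) and the helper name mbr_select are illustrative assumptions, not the paper's exact dialect-aware setup.

    # Minimal MBR decoding sketch over sampled candidates.
    # Assumption: chrF as the utility metric; the paper's dialect-aware
    # utility may differ.
    from sacrebleu.metrics import CHRF

    chrf = CHRF()

    def mbr_select(candidates: list[str]) -> str:
        """Return the candidate with the highest expected utility,
        estimated by averaging chrF between it and every other sample."""
        best, best_score = candidates[0], float("-inf")
        for hyp in candidates:
            # Keep equal-string duplicates: they represent probability
            # mass in the Monte Carlo estimate; skip only the hypothesis
            # object itself.
            others = [ref for ref in candidates if ref is not hyp]
            score = sum(
                chrf.sentence_score(hyp, [ref]).score for ref in others
            ) / max(len(others), 1)
            if score > best_score:
                best, best_score = hyp, score
        return best

A dialect-aware variant would replace or augment chrF with a utility that rewards outputs matching the target dialect, e.g. a dialect-identification score, which is one plausible reading of the paper's "dialect-aware MBR decoding".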
Anthology ID:
2026.vardial-1.28
Volume:
Proceedings of the 13th Workshop on NLP for Similar Languages, Varieties and Dialects
Month:
March
Year:
2026
Address:
Rabat, Morocco
Venues:
VarDial | WS
Publisher:
Association for Computational Linguistics
Pages:
352–358
URL:
https://aclanthology.org/2026.vardial-1.28/
Cite (ACL):
Abdulhai Alali and Abderrahmane Issam. 2026. Maastricht University at AMIYA: Adapting LLMs for Dialectal Arabic using Fine-tuning and MBR Decoding. In Proceedings of the 13th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 352–358, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Maastricht University at AMIYA: Adapting LLMs for Dialectal Arabic using Fine-tuning and MBR Decoding (Alali & Issam, VarDial 2026)
PDF:
https://aclanthology.org/2026.vardial-1.28.pdf