Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale

Hasan Abed Al Kader Hammoud, Mohamad Bilal Zbib, Bernard Ghanem


Abstract
We present HALA, a family of Arabic-centric instruction and translation models built with our translate-and-tune pipeline. We first compress a strong AR↔EN teacher to FP8 (yielding ~2× higher throughput with no quality loss) and use it to create high-fidelity bilingual supervision. A lightweight language model, LFM2-1.2B, is then fine-tuned on this data and used to translate high-quality English instruction sets into Arabic, producing a million-scale corpus tailored to instruction following. We train HALA models at 350M, 700M, 1.2B, and 9B parameters, and apply slerp merging to balance Arabic specialization with base-model strengths. On Arabic-centric benchmarks, HALA achieves state-of-the-art results within both the "nano" (≤2B) and "small" (7–9B) categories, outperforming their bases. We are committed to releasing models, data, evaluation, and recipes to accelerate research in Arabic NLP.
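The abstract names slerp (spherical linear interpolation) merging as the way Arabic specialization is balanced against base-model strengths, but the report excerpt gives no implementation details. Below is a minimal, hypothetical sketch of slerp applied parameter-by-parameter to two checkpoints; the function names, the per-tensor flattening, and the interpolation weight `t=0.5` are illustrative assumptions, not the authors' recipe.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation between two weight tensors,
    # treated as flat vectors on the hypersphere.
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    n0 = np.linalg.norm(v0f) + eps
    n1 = np.linalg.norm(v1f) + eps
    # Angle between the two (normalized) weight vectors.
    cos_omega = np.clip(np.dot(v0f / n0, v1f / n1), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if omega < eps:
        # Nearly colinear weights: fall back to linear interpolation.
        out = (1.0 - t) * v0f + t * v1f
    else:
        out = (np.sin((1.0 - t) * omega) * v0f
               + np.sin(t * omega) * v1f) / np.sin(omega)
    return out.reshape(v0.shape).astype(v0.dtype)

def merge_state_dicts(base, tuned, t=0.5):
    # Merge two checkpoints (dicts of name -> array) tensor by tensor.
    return {name: slerp(t, base[name], tuned[name]) for name in base}
```

At `t=0` the merge returns the base weights and at `t=1` the fine-tuned weights; intermediate `t` follows the great-circle arc between them, which in practice preserves weight norms better than plain averaging.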
Anthology ID:
2026.abjadnlp-1.32
Volume:
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
Month:
March
Year:
2026
Address:
Rabat, Morocco
Venues:
AbjadNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
236–244
URL:
https://aclanthology.org/2026.abjadnlp-1.32/
Cite (ACL):
Hasan Abed Al Kader Hammoud, Mohamad Bilal Zbib, and Bernard Ghanem. 2026. Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale. In Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script, pages 236–244, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale (Hammoud et al., AbjadNLP 2026)
PDF:
https://aclanthology.org/2026.abjadnlp-1.32.pdf