Building a Turkish Large Language Model via Continual Pre-Training and Parameter-Efficient Adaptation

Alperen Enes Bayar, Mert Ege, Gökhan Yurtalan, Alper Karamanlioglu, Berkan Demirel, Ramazan Gokberk Cinbis


Abstract
Large Language Models (LLMs) achieve strong performance on many tasks, but they still struggle with morphologically rich, low-resource languages such as Turkish. This difficulty stems from Turkish being an agglutinative language and underrepresented in multilingual training data, which causes current models to often fail at capturing its morphology, flexible word order, and formal registers. In this paper, we introduce MODA (Model Adapted for Domain Applications), a Turkish-specialized LLM built via a modular pipeline that combines continual pre-training, parameter-efficient fine-tuning, and model merging. Starting from Qwen2.5-7B as the base model, we first perform large-scale continual pre-training on a Turkish web corpus to improve grammatical and morphological representations. We then apply parameter-efficient supervised fine-tuning on task-oriented instruction data, and finally merge specialized variants into a single unified model. We evaluate MODA on TurkishMMLU, the Turkish subset of EXAMS, and TRCLAIM-19, where it consistently outperforms both the base and instruction-tuned Qwen2.5-7B models. Our results support a training strategy that explicitly separates linguistic acquisition from task alignment when adapting LLMs to morphologically rich, underrepresented languages under realistic hardware constraints.
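The abstract's final stage, merging specialized variants, and its use of parameter-efficient fine-tuning can be illustrated with a toy numeric sketch. The snippet below shows (1) folding a low-rank LoRA-style adapter update back into a base weight as W' = W + (alpha / r) * B A, and (2) merging variants by uniform parameter averaging. Both the function names and the uniform-averaging rule are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of two pipeline pieces in plain Python (no frameworks).
# Matrices are lists of lists; shapes are tiny for illustration only.

def matmul(B, A):
    """Multiply a d x r matrix B by an r x k matrix A."""
    r, k = len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
            for i in range(len(B))]

def fold_lora(W, B, A, alpha=16, r=2):
    """Fold a low-rank adapter into the base weight: W + (alpha/r) * B @ A.

    W is d x k; B is d x r; A is r x k. alpha/r is the standard LoRA
    scaling factor (assumed here, not taken from the paper).
    """
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

def average_variants(*weights):
    """Merge specialized variants by uniform parameter averaging
    (one simple merging strategy; the paper may use another)."""
    n = len(weights)
    return [[sum(W[i][j] for W in weights) / n
             for j in range(len(weights[0][0]))]
            for i in range(len(weights[0]))]
```

For example, with a rank-1 adapter and alpha = 1, `fold_lora([[1, 0], [0, 1]], [[1], [0]], [[0, 1]], alpha=1, r=1)` adds the outer-product update `[[0, 1], [0, 0]]` to the identity weight. In practice these operations run over full model state dicts rather than single matrices.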
Anthology ID:
2026.sigturk-1.17
Volume:
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Kemal Oflazer, Abdullatif Köksal, Onur Varol
Venues:
SIGTURK | WS
Publisher:
Association for Computational Linguistics
Pages:
209–219
URL:
https://aclanthology.org/2026.sigturk-1.17/
Cite (ACL):
Alperen Enes Bayar, Mert Ege, Gökhan Yurtalan, Alper Karamanlioglu, Berkan Demirel, and Ramazan Gokberk Cinbis. 2026. Building a Turkish Large Language Model via Continual Pre-Training and Parameter-Efficient Adaptation. In Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026), pages 209–219, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Building a Turkish Large Language Model via Continual Pre-Training and Parameter-Efficient Adaptation (Bayar et al., SIGTURK 2026)
PDF:
https://aclanthology.org/2026.sigturk-1.17.pdf