Mert Ege
2026
Building a Turkish Large Language Model via Continual Pre-Training and Parameter-Efficient Adaptation
Alperen Enes Bayar | Mert Ege | Gökhan Yurtalan | Alper Karamanlioglu | Berkan Demirel | Ramazan Gokberk Cinbis
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
Large Language Models (LLMs) achieve strong performance on many tasks, but they still struggle with morphologically rich, low-resource languages such as Turkish. This difficulty stems from Turkish being an agglutinative language that is underrepresented in multilingual training data, leaving current models prone to failures in capturing its morphology, flexible word order, and formal registers. In this paper, we introduce MODA (Model Adapted for Domain Applications), a Turkish-specialized LLM built via a modular pipeline that combines continual pre-training, parameter-efficient fine-tuning, and model merging. Starting from Qwen2.5-7B as the base model, we first perform large-scale continual pre-training on a Turkish web corpus to improve grammatical and morphological representations. We then apply parameter-efficient supervised fine-tuning on task-oriented instruction data, and finally merge specialized variants into a single unified model. We evaluate MODA on TurkishMMLU, the Turkish subset of EXAMS, and TRCLAIM-19, where it consistently outperforms both the base and instruction-tuned Qwen2.5-7B models. Our results support a training strategy that explicitly separates linguistic acquisition from task alignment when adapting LLMs to morphologically rich, underrepresented languages under realistic hardware constraints.
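The final merging step described above can be illustrated with a minimal sketch. One common merging strategy (the paper's exact method may differ) is weighted parameter averaging across specialized variants that share an architecture; the function name `merge_models` and the toy state dicts below are illustrative, not from the paper.

```python
import numpy as np

def merge_models(state_dicts, weights=None):
    """Merge model variants by weighted parameter averaging.

    state_dicts: list of {param_name: np.ndarray}, identical keys and shapes.
    weights: optional per-model mixing coefficients (normalized to sum to 1);
             defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    total = sum(weights)
    weights = [w / total for w in weights]
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Toy example: two "specialized variants" with one shared weight matrix.
variant_a = {"linear.weight": np.array([[1.0, 2.0], [3.0, 4.0]])}
variant_b = {"linear.weight": np.array([[3.0, 2.0], [1.0, 0.0]])}
merged = merge_models([variant_a, variant_b])
# Uniform average: each entry is the element-wise mean of the two variants.
```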