Bilge Kaan Görür
2026
RAGTurk: Best Practices for Retrieval Augmented Generation in Turkish
Süha Kağan Köse | Mehmet Can Baytekin | Burak Aktaş | Bilge Kaan Görür | Evren Ayberk Munis | Deniz Yılmaz | Muhammed Yusuf Kartal | Cagri Toraman
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
Retrieval-Augmented Generation (RAG) enhances LLM factuality, yet design guidance remains English-centric, limiting insights for morphologically rich languages like Turkish. We address this by constructing a comprehensive Turkish RAG dataset derived from Turkish Wikipedia and CulturaX, comprising question-answer pairs and relevant passage chunks. We benchmark seven stages of the RAG pipeline, from query transformation and reranking to answer refinement, without task-specific fine-tuning. Our results show that complex methods like HyDE maximize accuracy (85%), considerably higher than the baseline (78.70%). In addition, a Pareto-optimal configuration using Cross-encoder Reranking and Context Augmentation achieves comparable performance (84.60%) at much lower cost. We further demonstrate that over-stacking generative modules can degrade performance by distorting morphological cues, whereas simple query clarification with robust reranking offers an effective solution.
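The Pareto-optimal configuration named in the abstract (cross-encoder reranking followed by context augmentation) can be sketched as a three-step pipeline. This is a minimal illustration, not the paper's implementation: the embedding vectors and the `score_fn` stand in for a real dense retriever and cross-encoder.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunk_vecs, k=3):
    # Stage 1: dense retrieval of top-k candidate chunk indices.
    scored = sorted(enumerate(chunk_vecs),
                    key=lambda p: cosine(query_vec, p[1]), reverse=True)
    return [i for i, _ in scored[:k]]

def rerank(query, chunks, candidate_ids, score_fn, k=2):
    # Stage 2: rerank candidates; score_fn stands in for a cross-encoder
    # that scores each (query, chunk) pair jointly.
    scored = sorted(candidate_ids,
                    key=lambda i: score_fn(query, chunks[i]), reverse=True)
    return scored[:k]

def augment(ids, n_chunks, window=1):
    # Stage 3: context augmentation, pulling in neighboring chunks
    # around each reranked hit.
    out = set()
    for i in ids:
        out.update(range(max(0, i - window), min(n_chunks, i + window + 1)))
    return sorted(out)
```

A toy run with four single-token chunks: dense retrieval narrows to three candidates, a token-overlap stand-in scorer reranks them, and augmentation expands each hit by one neighbor.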
BIRDTurk: Adaptation of the BIRD Text-to-SQL Dataset to Turkish
Burak Aktaş | Mehmet Can Baytekin | Süha Kağan Köse | Ömer İlbilgi | Elif Özge Yılmaz | Cagri Toraman | Bilge Kaan Görür
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
Text-to-SQL systems have achieved strong performance on English benchmarks, yet their behavior in morphologically rich, low-resource languages remains largely unexplored. We introduce BIRDTurk, the first Turkish adaptation of the BIRD benchmark, constructed through a controlled translation pipeline that adapts schema identifiers to Turkish while strictly preserving the logical structure and execution semantics of SQL queries and databases. Translation quality is validated on a sample size determined by the Central Limit Theorem to ensure 95% confidence, achieving 98.15% accuracy on human-evaluated samples. Using BIRDTurk, we evaluate inference-based prompting, agentic multi-stage reasoning, and supervised fine-tuning. Our results reveal that Turkish introduces consistent performance degradation, driven by both structural linguistic divergence and underrepresentation in LLM pretraining, while agentic reasoning demonstrates stronger cross-lingual robustness. Supervised fine-tuning remains challenging for standard multilingual baselines but scales effectively with modern instruction-tuned models. BIRDTurk provides a controlled testbed for cross-lingual Text-to-SQL evaluation under realistic database conditions. We release the training and development splits to support future research.
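The confidence-based sampling the abstract mentions follows the standard formula for estimating a proportion. The abstract does not state the margin of error or the assumed proportion, so this sketch assumes the common defaults: a 5% margin and the worst-case p = 0.5.

```python
from math import ceil

def sample_size(z=1.96, p=0.5, e=0.05):
    """Minimum sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 for 95%).
    p: assumed proportion; 0.5 maximizes the required size (worst case).
    e: margin of error.
    """
    return ceil((z ** 2) * p * (1 - p) / e ** 2)
```

With these defaults the formula yields 385 samples; the actual parameters used for BIRDTurk's validation may differ.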
OCRTurk: A Comprehensive OCR Benchmark for Turkish
Deniz Yılmaz | Evren Ayberk Munis | Cagri Toraman | Süha Kağan Köse | Burak Aktaş | Mehmet Can Baytekin | Bilge Kaan Görür
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
Document parsing is now widely used in applications such as large-scale document digitization, retrieval-augmented generation, and domain-specific pipelines in healthcare and education. Benchmarking these models is crucial for assessing their reliability and practical robustness. Existing benchmarks mostly target high-resource languages and provide limited coverage for low-resource settings such as Turkish. Moreover, existing studies on Turkish document parsing lack a standardized benchmark that reflects real-world scenarios and document diversity. To address this gap, we introduce OCRTurk, a Turkish document parsing benchmark covering multiple layout elements and document categories at three difficulty levels. OCRTurk consists of 180 Turkish documents drawn from academic articles, theses, slide decks, and non-academic articles. We evaluate seven OCR models on OCRTurk using element-wise metrics. Across difficulty levels, PaddleOCR achieves the strongest overall results, leading most element-wise metrics except figures and attaining the best Normalized Edit Distance scores in the easy, medium, and hard subsets. We also observe performance variation by document type: models perform well on non-academic documents, while slide decks are the most challenging.
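The Normalized Edit Distance scores the abstract reports are conventionally Levenshtein distance divided by the length of the longer string, so 0.0 is a perfect match and 1.0 a complete mismatch. A minimal reference implementation (exact normalization details in OCRTurk may differ):

```python
def normalized_edit_distance(ref, hyp):
    """Levenshtein distance between ref and hyp, normalized by
    the length of the longer string (0.0 = identical)."""
    m, n = len(ref), len(hyp)
    if max(m, n) == 0:
        return 0.0
    # Dynamic programming over one rolling row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / max(m, n)
```

For example, an OCR output that reads "kitab" for the reference "kitap" has one substitution over five characters, giving a score of 0.2.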