Do LLMs Speak Kazakh? A Pilot Evaluation of Seven Models

Akylbek Maxutov, Ayan Myrzakhmet, Pavel Braslavski


Abstract
We conducted a systematic evaluation of seven large language models (LLMs) on tasks in Kazakh, a Turkic language spoken by approximately 13 million native speakers in Kazakhstan and abroad. We used six datasets corresponding to different tasks: question answering, causal reasoning, middle school math problems, machine translation, and spelling correction. Three of the datasets were prepared for this study. As expected, the quality of the LLMs on the Kazakh tasks is lower than on the parallel English tasks. GPT-4 shows the best results, followed by Gemini and . In general, LLMs perform better on classification tasks and struggle with generative tasks. Our results provide valuable insights into the applicability of currently available LLMs for Kazakh. We have made the data collected for this study publicly available: https://github.com/akylbekmaxutov/LLM-eval-using-Kazakh.
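For readers who want to run a similar comparison, the sketch below shows one way an accuracy-based evaluation of a multiple-choice (classification-style) Kazakh task could be wired up. It is a minimal illustration only: the sample items, field names, and the `ask_llm` stub are assumptions for this sketch, not the authors' actual code or data layout; see the GitHub repository above for the real datasets.

```python
# Minimal sketch of an accuracy evaluation loop for a multiple-choice task in
# Kazakh. The sample items and ask_llm() stub are hypothetical placeholders;
# replace them with real data from the authors' repository and a real model call.
from typing import Callable

# Tiny illustrative dataset (not from the paper): question, options, gold answer.
SAMPLE_ITEMS = [
    {
        "question": "Қазақстанның астанасы қай қала?",  # "Which city is the capital of Kazakhstan?"
        "options": {"A": "Астана", "B": "Алматы", "C": "Шымкент"},
        "answer": "A",
    },
]

def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call (API client or local model)."""
    return "A"  # dummy answer so the script runs end-to-end

def evaluate_accuracy(items: list[dict], model: Callable[[str], str]) -> float:
    """Prompt the model with each question plus options and score exact-match accuracy."""
    correct = 0
    for item in items:
        prompt = (
            item["question"]
            + "\n"
            + "\n".join(f"{key}) {text}" for key, text in item["options"].items())
            + "\nЖауап:"  # "Answer:" in Kazakh
        )
        prediction = model(prompt).strip().upper()[:1]  # keep only the option letter
        correct += int(prediction == item["answer"])
    return correct / max(len(items), 1)

if __name__ == "__main__":
    print(f"Accuracy: {evaluate_accuracy(SAMPLE_ITEMS, ask_llm):.3f}")
```

For generative tasks such as machine translation or spelling correction, the same loop would instead compare full model outputs against references with a string- or corpus-level metric rather than a single-letter exact match.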
Anthology ID:
2024.sigturk-1.8
Volume:
Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand and Online
Editors:
Duygu Ataman, Mehmet Oguz Derin, Sardana Ivanova, Abdullatif Köksal, Jonne Sälevä, Deniz Zeyrek
Venues:
SIGTURK | WS
Publisher:
Association for Computational Linguistics
Pages:
81–91
URL:
https://aclanthology.org/2024.sigturk-1.8
Cite (ACL):
Akylbek Maxutov, Ayan Myrzakhmet, and Pavel Braslavski. 2024. Do LLMs Speak Kazakh? A Pilot Evaluation of Seven Models. In Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024), pages 81–91, Bangkok, Thailand and Online. Association for Computational Linguistics.
Cite (Informal):
Do LLMs Speak Kazakh? A Pilot Evaluation of Seven Models (Maxutov et al., SIGTURK-WS 2024)
PDF:
https://aclanthology.org/2024.sigturk-1.8.pdf