Open foundation models for Azerbaijani language

Jafar Isbarov, Kavsar Huseynova, Elvin Mammadov, Mammad Hajili, Duygu Ataman
Abstract
The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found widespread use due to a lack of systematic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) an extensive evaluation covering all major open-source models with Azerbaijani support.
Anthology ID:
2024.sigturk-1.2
Original:
2024.sigturk-1.2v1
Version 2:
2024.sigturk-1.2v2
Volume:
Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand and Online
Editors:
Duygu Ataman, Mehmet Oguz Derin, Sardana Ivanova, Abdullatif Köksal, Jonne Sälevä, Deniz Zeyrek
Venues:
SIGTURK | WS
Publisher:
Association for Computational Linguistics
Pages:
18–28
URL:
https://aclanthology.org/2024.sigturk-1.2
Cite (ACL):
Jafar Isbarov, Kavsar Huseynova, Elvin Mammadov, Mammad Hajili, and Duygu Ataman. 2024. Open foundation models for Azerbaijani language. In Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024), pages 18–28, Bangkok, Thailand and Online. Association for Computational Linguistics.
Cite (Informal):
Open foundation models for Azerbaijani language (Isbarov et al., SIGTURK-WS 2024)
PDF:
https://aclanthology.org/2024.sigturk-1.2.pdf