Jafar Isbarov
2024
Open foundation models for Azerbaijani language
Jafar Isbarov | Kavsar Huseynova | Elvin Mammadov | Mammad Hajili | Duygu Ataman
Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)
The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systematic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) an extensive evaluation that covers all major open-source models with Azerbaijani support.
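As a concrete illustration of the kind of encoder-only modeling evaluated in the paper, the snippet below runs a masked-token probe through the Hugging Face `transformers` pipeline. It is a minimal sketch: the checkpoint name (`bert-base-multilingual-cased`) and the Azerbaijani prompt are placeholders standing in for the models and datasets released by the paper.

```python
# Minimal masked-token probe for an encoder-only model with Azerbaijani support.
# The checkpoint below is generic multilingual BERT, used here only as a stand-in
# for the Azerbaijani models described in the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# "Bakı Azərbaycanın [MASK] şəhəridir." ~ "Baku is the [MASK] city of Azerbaijan."
for candidate in fill_mask("Bakı Azərbaycanın [MASK] şəhəridir."):
    print(candidate["token_str"], round(candidate["score"], 3))
```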
Findings of the 2nd Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2024
Francesco Tinner | Raghav Mantri | Mammad Hajili | Chiamaka Chukwuneke | Dylan Massey | Benjamin A. Ajibade | Bilge Deniz Kocak | Abolade Dawud | Jonathan Atala | Hale Sirin | Kayode Olaleye | Anar Rzayev | Jafar Isbarov | Dursun Dashdamirov | David Adelani | Duygu Ataman
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Large language models (LLMs) demonstrate exceptional proficiency in both the comprehension and generation of textual data, particularly in English, a language for which extensive public benchmarks have been established across a wide range of natural language processing (NLP) tasks. Nonetheless, their performance in multilingual contexts and specialized domains remains less rigorously validated, raising questions about their reliability and generalizability across linguistically diverse and domain-specific settings. The second edition of the Shared Task on Multilingual Multitask Information Retrieval aims to provide a comprehensive and inclusive multilingual evaluation benchmark that aids in assessing the ability of multilingual LLMs to capture logical, factual, or causal relationships within lengthy text contexts and to generate language in sparse settings, particularly in scenarios involving under-resourced languages. The shared task consists of two subtasks crucial to information retrieval: named entity recognition (NER) and reading comprehension (RC), in seven data-scarce languages, including Azerbaijani, Swiss German, and Turkish, which previously lacked annotated resources for information retrieval tasks. This year's edition specifically focuses on a multiple-choice question answering evaluation setting, which provides a more objective basis for comparing different methods across languages.
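The multiple-choice question answering setting mentioned above is commonly scored by ranking candidate answers with a language model; the sketch below shows one such approach, selecting the option with the lowest average negative log-likelihood under a causal LM. The checkpoint and the example item are placeholders, and this is not the shared task's official evaluation code.

```python
# Hedged sketch of likelihood-based multiple-choice scoring with a causal LM.
# The checkpoint and the example question are placeholders, not shared-task data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # substitute any multilingual causal LM of interest
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_nll(question: str, option: str) -> float:
    """Average negative log-likelihood of the option tokens given the question."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    option_len = full_ids.shape[1] - prompt_ids.shape[1]  # tokens belonging to the option
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    token_ll = log_probs.gather(2, full_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return -token_ll[0, -option_len:].mean().item()

question = "Paris is the capital of which country?"
options = ["France", "Germany", "Spain", "Italy"]
print(min(options, key=lambda o: option_nll(question, o)))
```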