The UNLP 2024 Shared Task on Fine-Tuning Large Language Models for Ukrainian

Mariana Romanyshyn, Oleksiy Syvokon, Roman Kyslyi


Abstract
This paper presents the results of the UNLP 2024 shared task, the first Shared Task on Fine-Tuning Large Language Models for the Ukrainian language. The goal of the task was to facilitate the creation of models that have knowledge of the Ukrainian language, history, and culture, as well as common knowledge, and are capable of generating fluent and accurate responses in Ukrainian. Participants were required to use open-weight models of reasonable size to ensure that their solutions were reproducible. The participating systems were evaluated on multiple-choice exam questions and manually crafted open questions. Three teams submitted their solutions before the deadline; two of them also submitted papers, which were accepted to the UNLP workshop proceedings and are referenced in this report. The Codabench leaderboard remains open for further submissions.
Anthology ID:
2024.unlp-1.9
Volume:
Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Mariana Romanyshyn, Nataliia Romanyshyn, Andrii Hlybovets, Oleksii Ignatenko
Venue:
UNLP
Publisher:
ELRA and ICCL
Pages:
67–74
URL:
https://aclanthology.org/2024.unlp-1.9
Cite (ACL):
Mariana Romanyshyn, Oleksiy Syvokon, and Roman Kyslyi. 2024. The UNLP 2024 Shared Task on Fine-Tuning Large Language Models for Ukrainian. In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024, pages 67–74, Torino, Italia. ELRA and ICCL.
Cite (Informal):
The UNLP 2024 Shared Task on Fine-Tuning Large Language Models for Ukrainian (Romanyshyn et al., UNLP 2024)
PDF:
https://aclanthology.org/2024.unlp-1.9.pdf