LiBERTa: Advancing Ukrainian Language Modeling through Pre-training from Scratch

Mykola Haltiuk, Aleksander Smywiński-Pohl


Abstract
Recent advances in Natural Language Processing (NLP) have driven remarkable progress in language modeling, predominantly benefiting English. Ukrainian NLP has long grappled with significant challenges due to limited data and computational resources, but recent years have seen a shift with the emergence of new corpora, marking a pivotal moment in addressing these obstacles. This paper introduces LiBERTa Large, the first BERT Large model pre-trained from scratch solely on Ukrainian texts. Leveraging extensive multilingual text corpora, including a substantial Ukrainian subset, LiBERTa Large establishes a foundational resource for Ukrainian NLU tasks. Our model outperforms existing multilingual and monolingual models pre-trained from scratch for Ukrainian, and it performs competitively against models that rely on cross-lingual transfer from English. This result shows that superior performance can be achieved by pre-training from scratch with additional enhancements, without depending on design decisions made for English models in order to transfer their weights efficiently. We establish LiBERTa Large as a robust baseline, paving the way for future advancements in Ukrainian language modeling.
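Since the abstract describes LiBERTa Large as a foundational masked language model for Ukrainian NLU, the sketch below shows how such a checkpoint would typically be queried with the HuggingFace transformers library. The model identifier "Goader/liberta-large" is an assumption (check the authors' release for the actual checkpoint name), and the mask-token behavior is the standard one for masked language models, not a detail taken from the paper.

```python
# Minimal sketch: masked-token prediction with a Ukrainian MLM checkpoint.
# NOTE: the identifier "Goader/liberta-large" is assumed, not confirmed by the paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Goader/liberta-large")
model = AutoModelForMaskedLM.from_pretrained("Goader/liberta-large")

# Ukrainian prompt: "The capital of Ukraine is [MASK]."
text = f"Столиця України - {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and decode the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

The same checkpoint would normally be fine-tuned for downstream NLU tasks (e.g., via AutoModelForTokenClassification for named entity recognition); the masked-LM call above only illustrates the pre-training objective.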
Anthology ID:
2024.unlp-1.14
Volume:
Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Mariana Romanyshyn, Nataliia Romanyshyn, Andrii Hlybovets, Oleksii Ignatenko
Venue:
UNLP
Publisher:
ELRA and ICCL
Pages:
120–128
URL:
https://aclanthology.org/2024.unlp-1.14
Cite (ACL):
Mykola Haltiuk and Aleksander Smywiński-Pohl. 2024. LiBERTa: Advancing Ukrainian Language Modeling through Pre-training from Scratch. In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024, pages 120–128, Torino, Italia. ELRA and ICCL.
Cite (Informal):
LiBERTa: Advancing Ukrainian Language Modeling through Pre-training from Scratch (Haltiuk & Smywiński-Pohl, UNLP 2024)
PDF:
https://aclanthology.org/2024.unlp-1.14.pdf