“Vorbești Românește?” A Recipe to Train Powerful Romanian LLMs with English Instructions
Mihai Masala | Denis Ilie-Ablachim | Alexandru Dima | Dragos Georgian Corlatescu | Miruna-Andreea Zavelca | Ovio Olaru | Simina-Maria Terian | Andrei Terian | Marius Leordeanu | Horia Velicu | Marius Popescu | Mihai Dascalu | Traian Rebedea
Findings of the Association for Computational Linguistics: EMNLP 2024
In recent years, Large Language Models (LLMs) have achieved almost human-like performance on various tasks. While some LLMs have been trained on multilingual data, most of the training data is in English; hence, their performance in English greatly exceeds that in other languages. To our knowledge, we are the first to collect and translate a large collection of texts, instructions, and benchmarks and to train, evaluate, and release open-source LLMs tailored for Romanian. We evaluate our methods on four different categories, including academic benchmarks, MT-Bench (manually translated), and a professionally built historical, cultural, and social benchmark adapted to Romanian. We argue for the usefulness and high performance of RoLLMs by obtaining state-of-the-art results across the board. We publicly release all resources (i.e., data, training and evaluation code, models) with the goal of supporting and encouraging research on Romanian LLMs, while concurrently creating a generalizable recipe adequate for other low- or less-resourced languages.