Evaluating the LLM and NMT Models in Translating Low-Resourced Languages

Julita JP Pucinskaite, Ruslan Mitkov


Abstract
Machine translation has advanced significantly thanks to the transformer architecture that underpins most modern deep-learning models. Low-resource languages such as Lithuanian, however, still face challenges stemming from the limited availability of training data and from resource constraints. This study examines the translation capabilities of Neural Machine Translation (NMT) models and Large Language Models (LLMs), comparing their performance on low-resource translation tasks, and assesses how parameter scaling and fine-tuning affect model performance. The evaluation showed that while LLMs handled low-resource translation competently, their scores fell below those of NMT models, which remained consistent across their smaller variants; as LLM size increased, however, the NMT lead narrowed, a pattern confirmed by both automatic and human evaluations. Fine-tuning proved an effective strategy for improving translation accuracy, yielding gains in vocabulary coverage and structural coherence for both architectures. These findings highlight the importance of diverse datasets, careful model design, and fine-tuning techniques in addressing the challenges of low-resource language translation. As one of the first studies to focus on the low-resourced Lithuanian language, this project aims to contribute to the broader discourse and ongoing efforts to enhance accessibility and inclusivity in Natural Language Processing.
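The automatic evaluation the abstract refers to typically relies on n-gram overlap metrics such as BLEU. As an illustration only (the paper's exact metrics and tooling are not specified here), a minimal sentence-level BLEU with geometric-mean n-gram precision and a brevity penalty can be sketched as:

```python
import math
from collections import Counter


def ngram_counts(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(hypothesis, reference, max_n=4):
    """Unsmoothed sentence-level BLEU against a single reference.

    hypothesis, reference: lists of tokens. Returns a score in [0, 1].
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp = ngram_counts(hypothesis, n)
        ref = ngram_counts(reference, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
        total = max(sum(hyp.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing: any zero precision zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short hypotheses.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * geo_mean
```

In practice, published results use standardized implementations (e.g. corpus-level BLEU with smoothing) rather than this toy version, but the clipped-precision and brevity-penalty mechanics are the same.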
Anthology ID:
2025.r2lm-1.13
Volume:
Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Alicia Picazo-Izquierdo, Ernesto Luis Estevanell-Valladares, Ruslan Mitkov, Rafael Muñoz Guillena, Raúl García Cerdá
Venues:
R2LM | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
123–133
URL:
https://aclanthology.org/2025.r2lm-1.13/
Cite (ACL):
Julita JP Pucinskaite and Ruslan Mitkov. 2025. Evaluating the LLM and NMT Models in Translating Low-Resourced Languages. In Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models, pages 123–133, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Evaluating the LLM and NMT Models in Translating Low-Resourced Languages (Pucinskaite & Mitkov, R2LM 2025)
PDF:
https://aclanthology.org/2025.r2lm-1.13.pdf