Pretraining and Benchmarking Modern Encoders for Latvian

Arturs Znotins


Abstract
Encoder-only transformers remain essential for practical NLP tasks. While recent advances in multilingual models have improved cross-lingual capabilities, low-resource languages such as Latvian remain underrepresented in pretraining corpora, and few monolingual Latvian encoders currently exist. We address this gap by pretraining a suite of Latvian-specific encoders based on RoBERTa, DeBERTaV3, and ModernBERT architectures, including long-context variants, and evaluating them on a comprehensive Latvian benchmark suite. Our models are competitive with existing monolingual and multilingual encoders while benefiting from recent architectural and efficiency advances. Our best model, lv-deberta-base (111M parameters), achieves the strongest overall performance, outperforming larger multilingual baselines and prior Latvian-specific encoders. We release all pretrained models and evaluation resources to support further research and practical applications in Latvian NLP.
Anthology ID: 2026.loreslm-1.40
Volume: Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Month: March
Year: 2026
Address: Rabat, Morocco
Editors: Hansi Hettiarachchi, Tharindu Ranasinghe, Alistair Plum, Paul Rayson, Ruslan Mitkov, Mohamed Gaber, Damith Premasiri, Fiona Anting Tan, Lasitha Uyangodage
Venue: LoResLM
Publisher: Association for Computational Linguistics
Pages: 461–470
URL: https://aclanthology.org/2026.loreslm-1.40/
Cite (ACL): Arturs Znotins. 2026. Pretraining and Benchmarking Modern Encoders for Latvian. In Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026), pages 461–470, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal): Pretraining and Benchmarking Modern Encoders for Latvian (Znotins, LoResLM 2026)
PDF: https://aclanthology.org/2026.loreslm-1.40.pdf