When Every Token Counts: Optimal Segmentation for Low-Resource Language Models

Vikrant Dewangan, Bharath Raj S, Garvit Suri, Raghav Sonavane


Abstract
Tokenization is a critical step in Natural Language Processing (NLP): it determines how text is converted into tokens and directly impacts model performance, yet it is traditionally performed greedily. While subword tokenizers such as Byte-Pair Encoding (BPE) are widely used, questions remain about their optimality across model scales and languages. In this work, we demonstrate through extensive experiments that an optimal BPE configuration significantly reduces the token count compared to greedy segmentation, yielding higher token savings and performance gains, particularly for smaller models. We evaluate tokenization performance across various intrinsic and extrinsic tasks, including generation and classification. Our findings suggest that compression-optimized tokenization strategies could provide substantial advantages for multilingual and low-resource (LR) language applications, highlighting a promising direction for further research and inclusive NLP.
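
To make the greedy-versus-optimal contrast concrete, the sketch below is a minimal illustration (not the authors' implementation; the vocabulary and word are hypothetical): given a fixed subword vocabulary, a left-to-right longest-match segmenter, used here as a stand-in for greedy subword tokenization, can emit more tokens than a dynamic-programming segmenter that minimizes the token count.

```python
def greedy_segment(word, vocab):
    """Left-to-right longest-match segmentation (a stand-in for greedy subword tokenization)."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try the longest remaining piece first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:                                   # no vocabulary piece matched: back off to one character
            tokens.append(word[i])
            i += 1
    return tokens


def optimal_segment(word, vocab):
    """Minimum-token segmentation via dynamic programming over prefixes of the word."""
    n = len(word)
    best = [None] * (n + 1)                     # best[i] = shortest token list covering word[:i]
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            piece = word[j:i]
            # single characters are always allowed as a back-off, mirroring greedy_segment
            if best[j] is not None and (piece in vocab or len(piece) == 1):
                candidate = best[j] + [piece]
                if best[i] is None or len(candidate) < len(best[i]):
                    best[i] = candidate
    return best[n]


if __name__ == "__main__":
    # Hypothetical toy vocabulary, chosen only so the two strategies disagree.
    vocab = {"un", "related", "unrelat", "e", "d"}
    word = "unrelated"
    print("greedy :", greedy_segment(word, vocab))   # ['unrelat', 'e', 'd']  -> 3 tokens
    print("optimal:", optimal_segment(word, vocab))  # ['un', 'related']      -> 2 tokens
```

On this toy input the greedy segmenter uses three tokens where the optimal one uses two; the paper quantifies this kind of token saving at corpus scale and measures its downstream effect on model performance.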
Anthology ID: 2025.loreslm-1.24
Volume: Proceedings of the First Workshop on Language Models for Low-Resource Languages
Month: January
Year: 2025
Address: Abu Dhabi, United Arab Emirates
Editors: Hansi Hettiarachchi, Tharindu Ranasinghe, Paul Rayson, Ruslan Mitkov, Mohamed Gaber, Damith Premasiri, Fiona Anting Tan, Lasitha Uyangodage
Venues: LoResLM | WS
Publisher: Association for Computational Linguistics
Pages: 294–308
URL: https://aclanthology.org/2025.loreslm-1.24/
Cite (ACL): Vikrant Dewangan, Bharath Raj S, Garvit Suri, and Raghav Sonavane. 2025. When Every Token Counts: Optimal Segmentation for Low-Resource Language Models. In Proceedings of the First Workshop on Language Models for Low-Resource Languages, pages 294–308, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): When Every Token Counts: Optimal Segmentation for Low-Resource Language Models (Dewangan et al., LoResLM 2025)
PDF: https://aclanthology.org/2025.loreslm-1.24.pdf