When Multilingual Evaluation Assumptions Fail: Tokenization Effects Across Scripts

Manodyna K H, Luc De Nardi

Abstract
Multilingual evaluation often relies on language coverage or translated benchmarks, implicitly assuming that subword tokenization behaves comparably across scripts. In mixed-script settings, this assumption breaks down. We examine this effect using polarity detection as a case study, comparing Orthographic Syllable Pair Encoding (OSPE) and Byte Pair Encoding (BPE) under identical architectures, data, and training conditions on SemEval Task 9, which spans Devanagari, Perso-Arabic, and Latin scripts. OSPE is applied to Hindi, Nepali, Urdu, and Arabic, while BPE is retained for English. We find that BPE systematically underestimates performance in abugida and abjad scripts, producing fragmented representations, unstable optimization, and drops of up to 27 macro-F1 points for Nepali, while English remains largely unaffected. Script-aware segmentation preserves orthographic structure, stabilizes training, and improves cross-language comparability without additional data or model scaling, highlighting tokenization as a latent but consequential evaluation decision in multilingual benchmarks.
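The fragmentation gap described above is visible at the level of raw segmentation. Below is a minimal sketch in Python, assuming a deliberately simplified akshara (orthographic-syllable) grammar for Devanagari; the regex is an illustrative approximation introduced here for exposition, not a reproduction of the paper's OSPE algorithm. It contrasts the number of starting symbols a byte-level tokenizer sees with the number of orthographic syllables in the same word.

# Minimal sketch (simplified, illustrative): why byte-level segmentation
# fragments Devanagari while an orthographic-syllable segmentation does not.
# The akshara pattern below is a rough approximation, not the paper's OSPE.
import re

CONSONANT = "[\u0915-\u0939\u0958-\u095F]"   # Devanagari consonants
INDEP_VOWEL = "[\u0904-\u0914]"              # independent vowels
MARK = "[\u093E-\u094C\u0901-\u0903]"        # dependent vowel signs + nasal/visarga marks
VIRAMA = "\u094D"                            # conjunct-forming sign (halant)

# One akshara: a consonant cluster joined by viramas, plus optional marks,
# or an independent vowel plus optional marks.
AKSHARA = re.compile(
    f"(?:{CONSONANT}(?:{VIRAMA}{CONSONANT})*{MARK}*{VIRAMA}?|{INDEP_VOWEL}{MARK}*)"
)

def orthographic_syllables(text: str) -> list[str]:
    """Segment Devanagari text into aksharas (orthographic syllables)."""
    return AKSHARA.findall(text)

word = "नमस्ते"  # "namaste"
print(len(word.encode("utf-8")), "byte symbols for a byte-level tokenizer")  # 18
print(len(word), "Unicode code points (matras split from their bases)")      # 6
print(orthographic_syllables(word))  # ['न', 'म', 'स्ते'] -> 3 aksharas

# A Latin-script word of comparable length starts from far fewer symbols:
print(len("namaste".encode("utf-8")), "byte symbols for the Latin-script form")  # 7

The same six-letter word thus begins as 18 symbols for a byte-level tokenizer but only 3 orthographic syllables, which is the kind of representational asymmetry the abstract identifies between Latin-script English and abugida-script languages.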
Anthology ID:
2026.loreslm-1.4
Volume:
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Hansi Hettiarachchi, Tharindu Ranasinghe, Alistair Plum, Paul Rayson, Ruslan Mitkov, Mohamed Gaber, Damith Premasiri, Fiona Anting Tan, Lasitha Uyangodage
Venue:
LoResLM
Publisher:
Association for Computational Linguistics
Pages:
41–49
URL:
https://aclanthology.org/2026.loreslm-1.4/
Cite (ACL):
Manodyna K H and Luc De Nardi. 2026. When Multilingual Evaluation Assumptions Fail: Tokenization Effects Across Scripts. In Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026), pages 41–49, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
When Multilingual Evaluation Assumptions Fail: Tokenization Effects Across Scripts (H & De Nardi, LoResLM 2026)
PDF:
https://aclanthology.org/2026.loreslm-1.4.pdf