Effects of sub-word segmentation on performance of transformer language models

Jue Hou, Anisia Katinskaia, Anh-Duc Vu, Roman Yangarber


Abstract
Language modeling is a fundamental task in natural language processing, which has been thoroughly explored with various architectures and hyperparameters. However, few studies focus on the effect of sub-word segmentation on the performance of language models (LMs). In this paper, we compare GPT and BERT models trained with the statistical segmentation algorithm BPE vs. two unsupervised algorithms for morphological segmentation: Morfessor and StateMorph. We train the models for several languages, including ones with very rich morphology, and compare their performance with different segmentation algorithms, vocabulary sizes, and model sizes. The results show that training with morphological segmentation allows the LMs to: (1) achieve lower perplexity, (2) converge more efficiently in terms of training time, and (3) achieve equivalent or better evaluation scores on downstream tasks. Lastly, we show that (4) LMs of smaller size using morphological segmentation can perform comparably to models of larger size trained with BPE, both in terms of (1) perplexity and (3) scores on downstream tasks. Points (2) and (4) bear on sustainability, since they reduce model cost; and while (2) reduces cost only in the training phase, (4) does so in the inference phase as well.
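To illustrate the contrast the abstract draws between morphological and statistical segmentation, here is a minimal sketch showing how a morpheme-aligned vocabulary and a frequency-driven (BPE-style) vocabulary can split the same word differently. Both toy vocabularies and the greedy `segment` helper are hypothetical illustrations, not the paper's actual tokenizers or the output of Morfessor, StateMorph, or a trained BPE model.

```python
def segment(word, vocab):
    """Greedy longest-match segmentation: repeatedly take the longest
    vocabulary item that is a prefix of the remaining string."""
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # No vocabulary item matches: fall back to a single character.
            pieces.append(word[i])
            i += 1
    return pieces

# Hypothetical vocabularies for the English word "unhappiness":
morph_vocab = {"un", "happi", "ness"}          # morpheme-aligned units
bpe_like_vocab = {"unh", "app", "in", "ess"}   # frequency-driven chunks

print(segment("unhappiness", morph_vocab))     # ['un', 'happi', 'ness']
print(segment("unhappiness", bpe_like_vocab))  # ['unh', 'app', 'in', 'ess']
```

The morpheme-aligned split recovers linguistically meaningful units (prefix, stem, suffix), whereas the frequency-driven split crosses morpheme boundaries; the paper's claim is that the former helps LMs, especially for morphologically rich languages.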
Anthology ID:
2023.emnlp-main.459
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7413–7425
URL:
https://aclanthology.org/2023.emnlp-main.459
DOI:
10.18653/v1/2023.emnlp-main.459
Bibkey:
Cite (ACL):
Jue Hou, Anisia Katinskaia, Anh-Duc Vu, and Roman Yangarber. 2023. Effects of sub-word segmentation on performance of transformer language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7413–7425, Singapore. Association for Computational Linguistics.
Cite (Informal):
Effects of sub-word segmentation on performance of transformer language models (Hou et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.459.pdf
Video:
https://aclanthology.org/2023.emnlp-main.459.mp4