Unsupervised Tokenization Learning

Anton Kolonin, Vignav Ramesh


Abstract
In the presented study, we discover that the so-called “transition freedom” metric appears superior to statistical metrics such as mutual information and conditional probability for unsupervised tokenization, providing F-measure scores in the range from 0.71 to 1.0 across the explored multilingual corpora. We find that different languages require different offshoots of this metric (such as its derivative, variance, and “peak values”) for successful tokenization. Larger training corpora do not necessarily result in better tokenization quality, while compressing the models by eliminating statistically weak evidence tends to improve performance. The proposed unsupervised tokenization technique provides quality better than or comparable to that of lexicon-based techniques, depending on the language.
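The method itself is not spelled out on this page, so the following is only a minimal Python sketch of the underlying idea, under the assumption that the forward “transition freedom” of a character n-gram is the number of distinct characters observed to follow it in a training corpus, and that token boundaries are placed at local peaks of that score (one of the “offshoots” the abstract mentions). The function names and the exact peak rule are illustrative, not the authors' implementation.

# Sketch: unsupervised tokenization via forward transition freedom.
from collections import defaultdict

def train_transition_freedom(corpus, n=1):
    """Map each character n-gram to the count of distinct successor characters."""
    successors = defaultdict(set)
    for text in corpus:
        for i in range(len(text) - n):
            successors[text[i:i + n]].add(text[i + n])
    return {gram: len(nexts) for gram, nexts in successors.items()}

def tokenize(text, freedom, n=1):
    """Split text at local peaks of the transition-freedom profile."""
    # Score each position by the freedom of the n-gram ending there.
    scores = [freedom.get(text[max(0, i - n + 1):i + 1], 0)
              for i in range(len(text))]
    tokens, start = [], 0
    for i in range(1, len(text) - 1):
        # A peak: freedom rises into position i and does not rise after it.
        if scores[i] > scores[i - 1] and scores[i] >= scores[i + 1]:
            tokens.append(text[start:i + 1])
            start = i + 1
    tokens.append(text[start:])
    return [t for t in tokens if t]

corpus = ["the cat sat on the mat", "the dog sat on the log"]
freedom = train_transition_freedom(corpus)
print(tokenize("the cat sat", freedom))
# -> ['the ', 'cat ', 'sat'] (delimiters stay attached in this naive variant)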
Anthology ID: 2022.emnlp-main.239
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 3649–3664
URL: https://aclanthology.org/2022.emnlp-main.239
DOI: 10.18653/v1/2022.emnlp-main.239
Cite (ACL): Anton Kolonin and Vignav Ramesh. 2022. Unsupervised Tokenization Learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3649–3664, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Unsupervised Tokenization Learning (Kolonin & Ramesh, EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.239.pdf