MonoByte: A Pool of Monolingual Byte-level Language Models

Hugo Abonizio, Leandro Rodrigues de Souza, Roberto Lotufo, Rodrigo Nogueira


Abstract
The zero-shot cross-lingual ability of models pretrained on multilingual and even monolingual corpora has spurred many hypotheses to explain this intriguing empirical result. However, due to the costs of pretraining, most research uses public models whose pretraining methodology, such as the choice of tokenization, corpus size, and computational budget, might differ drastically. When researchers pretrain their own models, they often do so under a constrained budget, and the resulting models might underperform significantly compared to state-of-the-art models. These experimental differences have led to inconsistent conclusions about the nature of the cross-lingual ability of these models. To support further research on the topic, we release 10 monolingual byte-level models rigorously pretrained under the same configuration with a large compute budget (equivalent to 420 days on a V100) and corpora that are 4 times larger than the original BERT's. Because they are tokenizer-free, the problem of unseen token embeddings is eliminated, allowing researchers to try a wider range of cross-lingual experiments in languages with different scripts. Additionally, we release two models pretrained on non-natural language texts that can be used in sanity-check experiments. Experiments on QA and NLI tasks show that our monolingual models achieve performance competitive with the multilingual one, and can thus serve to strengthen our understanding of cross-lingual transferability in language models.
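For readers unfamiliar with the tokenizer-free setup, the sketch below illustrates byte-level encoding in the ByT5 style that byte-level models like these follow: every string, in any script, maps into a fixed 256-value byte vocabulary, so unseen tokens cannot occur. The +3 ID offset reserving slots for pad/eos/unk mirrors ByT5's convention; the helper names are our own and purely illustrative.

SPECIAL_OFFSET = 3  # IDs 0-2 reserved for special tokens (pad, eos, unk), as in ByT5

def byte_encode(text: str) -> list[int]:
    """Encode text as UTF-8 byte values shifted past the special-token IDs."""
    return [b + SPECIAL_OFFSET for b in text.encode("utf-8")]

def byte_decode(ids: list[int]) -> str:
    """Invert byte_encode, skipping any special-token IDs."""
    return bytes(i - SPECIAL_OFFSET for i in ids if i >= SPECIAL_OFFSET).decode("utf-8")

# Works identically for any script -- Latin, Hangul, Cyrillic, ... -- with no
# vocabulary lookup and hence no out-of-vocabulary tokens.
for sample in ["hello", "안녕하세요", "привет"]:
    ids = byte_encode(sample)
    assert byte_decode(ids) == sample
    print(sample, "->", len(ids), "byte IDs")

Note that multi-byte characters expand into several IDs (e.g., each Hangul syllable occupies three bytes in UTF-8), which is why byte-level models trade longer sequences for a vocabulary that covers every script by construction.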
Anthology ID:
2022.coling-1.309
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3506–3513
URL:
https://aclanthology.org/2022.coling-1.309
Cite (ACL):
Hugo Abonizio, Leandro Rodrigues de Souza, Roberto Lotufo, and Rodrigo Nogueira. 2022. MonoByte: A Pool of Monolingual Byte-level Language Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3506–3513, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
MonoByte: A Pool of Monolingual Byte-level Language Models (Abonizio et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.309.pdf
Code:
lersouza/lang-agnostic
Data:
TyDiQA, XNLI, mC4