MANTa: Efficient Gradient-Based Tokenization for End-to-End Robust Language Modeling

Nathan Godey, Roman Castagné, Éric de la Clergerie, Benoît Sagot


Abstract
Static subword tokenization algorithms have been an essential component of recent work on language modeling. However, their static nature results in significant weaknesses that degrade the models’ downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pre-trained model on several English datasets from different domains as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
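The abstract describes a differentiable tokenizer that softly segments byte sequences into blocks and pools them into embeddings for the language model. The sketch below is only an illustration of that general idea, not the authors' implementation: the module name (SoftBlockPooler), the sigmoid frontier predictor, the Gaussian-softmax block assignment, and all dimensions are simplified assumptions chosen to keep the example short and runnable.

```python
# Minimal sketch of a differentiable byte-to-block pooling module, in the spirit of
# gradient-based tokenization. Hypothetical simplification, not the MANTa code.
import torch
import torch.nn as nn


class SoftBlockPooler(nn.Module):
    def __init__(self, byte_vocab=256, d_model=64, max_blocks=32):
        super().__init__()
        self.byte_emb = nn.Embedding(byte_vocab, d_model)
        # Predicts, for every byte, the probability that a new block starts here.
        self.frontier = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, 1)
        )
        self.max_blocks = max_blocks

    def forward(self, byte_ids):
        # byte_ids: (batch, seq_len) integer byte values
        x = self.byte_emb(byte_ids)                              # (B, L, D)
        p_front = torch.sigmoid(self.frontier(x)).squeeze(-1)    # (B, L)
        # Expected block index of each byte = cumulative sum of frontier probabilities.
        pos = torch.cumsum(p_front, dim=-1)                      # (B, L)
        blocks = torch.arange(self.max_blocks, device=x.device).float()  # (K,)
        # Soft, differentiable assignment of bytes to blocks around the expected position.
        dist = (pos.unsqueeze(-1) - blocks) ** 2                 # (B, L, K)
        assign = torch.softmax(-dist, dim=-1)
        # Normalize per block and pool byte embeddings into block embeddings.
        weights = assign / (assign.sum(dim=1, keepdim=True) + 1e-6)
        block_emb = torch.einsum("blk,bld->bkd", weights, x)     # (B, K, D)
        return block_emb, p_front


if __name__ == "__main__":
    pooler = SoftBlockPooler()
    ids = torch.randint(0, 256, (2, 128))
    blocks, frontiers = pooler(ids)
    print(blocks.shape)  # torch.Size([2, 32, 64])
```

Because every step is differentiable, gradients from the downstream language-modeling loss can flow back into the frontier predictor, which is what allows the segmentation to be learned end-to-end rather than fixed in advance.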
Anthology ID:
2022.findings-emnlp.207
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2859–2870
URL:
https://aclanthology.org/2022.findings-emnlp.207
DOI:
10.18653/v1/2022.findings-emnlp.207
Cite (ACL):
Nathan Godey, Roman Castagné, Éric de la Clergerie, and Benoît Sagot. 2022. MANTa: Efficient Gradient-Based Tokenization for End-to-End Robust Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2859–2870, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
MANTa: Efficient Gradient-Based Tokenization for End-to-End Robust Language Modeling (Godey et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.207.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.207.mp4