What is the best recipe for character-level encoder-only modelling?

Kris Cao


Abstract
This paper aims to benchmark recent progress in language understanding models that output contextualised representations at the character level. Many such modelling architectures and methods to train those architectures have been proposed, but it is currently unclear what the relative contributions of the architecture vs. the pretraining objective are to final model performance. We explore the design space of such models, comparing architectural innovations (Clark et al., 2022; Jaegle et al., 2022; Tay et al., 2021) and a variety of different pretraining objectives on a suite of evaluation tasks with a fixed training procedure in order to find the currently optimal way to build and train character-level BERT-like models. We find that our best-performing character-level model exceeds the performance of a token-based model trained with the same settings on the same data, suggesting that character-level models are ready for more widespread adoption. Unfortunately, the best method to train character-level models still relies on a subword-level tokeniser during pretraining, and final model performance is highly dependent on tokeniser quality. We believe our results demonstrate the readiness of character-level models for multilingual language representation, and encourage NLP practitioners to try them as drop-in replacements for token-based models.
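To make the "drop-in replacement" claim concrete, the sketch below loads a character-level encoder (CANINE; Clark et al., 2022, one of the architectures compared in the paper) next to a subword-level BERT baseline. The Hugging Face transformers library and the specific public checkpoints are illustrative choices for this sketch, not artifacts released with this paper.

```python
# Minimal sketch: a character-level encoder (CANINE, Clark et al., 2022) used as a
# drop-in replacement for a subword-level encoder. The `transformers` checkpoints
# below are public illustrative models, not the models trained in this paper.
from transformers import AutoModel, AutoTokenizer

text = ["Character-level models need no subword vocabulary."]

# Subword-level baseline: tokenises into WordPiece units before encoding.
bert_tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased")
bert_out = bert(**bert_tok(text, return_tensors="pt", padding=True))

# Character-level encoder: operates directly on Unicode code points.
canine_tok = AutoTokenizer.from_pretrained("google/canine-c")
canine = AutoModel.from_pretrained("google/canine-c")
canine_out = canine(**canine_tok(text, return_tensors="pt", padding=True))

# Both expose the same interface and return contextualised representations;
# only the granularity of the sequence axis differs (subwords vs. characters).
print(bert_out.last_hidden_state.shape)    # (1, num_subword_tokens, 768)
print(canine_out.last_hidden_state.shape)  # (1, num_characters + specials, 768)
```

Because the output interface is identical, downstream task heads (classification, tagging, span extraction) can be attached to either encoder without changing the surrounding pipeline.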
Anthology ID:
2023.acl-long.326
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5924–5938
URL:
https://aclanthology.org/2023.acl-long.326
DOI:
10.18653/v1/2023.acl-long.326
Cite (ACL):
Kris Cao. 2023. What is the best recipe for character-level encoder-only modelling? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5924–5938, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
What is the best recipe for character-level encoder-only modelling? (Cao, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.326.pdf
Video:
https://aclanthology.org/2023.acl-long.326.mp4