Word Boundary Information Isn’t Useful for Encoder Language Models

Edward Gow-Smith, Dylan Phelps, Harish Tayyar Madabushi, Carolina Scarton, Aline Villavicencio


Abstract
All existing transformer-based approaches to NLP that use subword tokenisation algorithms encode whitespace (word boundary information) through special space symbols (such as ## or _) that form part of tokens. These symbols have been shown to a) reduce the morphological validity of tokenisations and b) introduce substantial vocabulary redundancy. Accordingly, removing these symbols has been shown to benefit the processing of morphologically complex words by transformer encoders in the pretrain-finetune paradigm. In this work, we explore whether word boundary information is useful to such models at all. In particular, we train transformer encoders at four different training scales and investigate several alternative approaches to including word boundary information, evaluating on two languages (English and Finnish) with a range of tasks across different domains and problem set-ups: sentence classification datasets, NER (for token-level classification), and two classification datasets involving complex words (Superbizarre and FLOTA). Overall, through an extensive experimental setup that includes the pretraining of 35 models, we find no substantial improvements from our alternative approaches, suggesting that modifying tokenisers to remove word boundary information does not lead to a loss of useful information.
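The abstract refers to boundary symbols such as ## forming part of tokens. The toy sketch below (our illustration, not the paper's code) shows how a greedy WordPiece-style tokeniser attaches the ## marker to non-initial subwords, and why the marker creates the vocabulary redundancy the authors describe: a piece like "less" needs one vocabulary entry for word-initial position and a second entry, "##less", for word-internal position. The vocabulary and helper function here are hypothetical.

```python
# Toy illustration (hypothetical, not the paper's code) of how WordPiece-style
# tokenisers encode word boundary information via a "##" continuation marker.

def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first tokenisation of a single word."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # non-initial pieces carry the boundary marker
            if piece in vocab:
                tokens.append(piece)
                break
            end -= 1
        else:
            return ["[UNK]"]  # no vocabulary entry matches this span
        start = end
    return tokens

# Redundancy: "less" must be stored twice, with and without the marker.
vocab = {"hope", "less", "##less", "##ness"}

print(wordpiece_tokenize("hopelessness", vocab))  # ['hope', '##less', '##ness']
print(wordpiece_tokenize("less", vocab))          # ['less']
```

Dropping the marker, as in the boundary-free tokenisers the paper builds on, would collapse "less" and "##less" into a single vocabulary entry, at the cost of the model no longer seeing where words begin; whether that lost signal is actually useful is exactly what the paper tests.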
Anthology ID: 2024.repl4nlp-1.10
Volume: Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Chen Zhao, Marius Mosbach, Pepa Atanasova, Seraphina Goldfarb-Tarrent, Peter Hase, Arian Hosseini, Maha Elbayad, Sandro Pezzelle, Maximilian Mozes
Venues: RepL4NLP | WS
Publisher: Association for Computational Linguistics
Pages: 118–135
URL: https://aclanthology.org/2024.repl4nlp-1.10
Cite (ACL): Edward Gow-Smith, Dylan Phelps, Harish Tayyar Madabushi, Carolina Scarton, and Aline Villavicencio. 2024. Word Boundary Information Isn’t Useful for Encoder Language Models. In Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024), pages 118–135, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Word Boundary Information Isn’t Useful for Encoder Language Models (Gow-Smith et al., RepL4NLP-WS 2024)
PDF: https://aclanthology.org/2024.repl4nlp-1.10.pdf