HashFormers: Towards Vocabulary-independent Pre-trained Transformers

Huiyin Xue, Nikolaos Aletras


Abstract
Transformer-based pre-trained language models are vocabulary-dependent, mapping each token by default to its corresponding embedding. This one-to-one mapping results in embedding matrices that occupy a lot of memory (i.e. millions of parameters) and grow linearly with the size of the vocabulary. Previous work on on-device transformers dynamically generates token embeddings on the fly without embedding matrices, using locality-sensitive hashing over morphological information. These embeddings are subsequently fed into transformer layers for text classification. However, these methods are not pre-trained. Inspired by this line of work, we propose HashFormers, a new family of vocabulary-independent pre-trained transformers that support an unlimited vocabulary (i.e. all possible tokens in a corpus) given a substantially smaller fixed-size embedding matrix. We achieve this by first introducing computationally cheap hashing functions that bucket individual tokens together into shared embeddings. We also propose three variants that do not require an embedding matrix at all, further reducing the memory requirements. We empirically demonstrate that HashFormers are more memory efficient than standard pre-trained transformers while achieving comparable predictive performance when fine-tuned on multiple text classification tasks. For example, our most efficient HashFormer variant incurs a negligible performance degradation (0.4% on GLUE) using only 99.1K parameters for representing the embeddings, compared to 12.3-38M parameters of state-of-the-art models.
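The sketch below illustrates the general idea described in the abstract: replacing a vocabulary-sized embedding matrix with a small fixed-size table indexed by a cheap hash of the token. This is a minimal illustration, not the authors' implementation; the bucket count, embedding dimension, and the use of MD5 as the hash function are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): hash arbitrary token
# strings into a small, fixed number of buckets and look up embeddings there,
# so memory no longer grows with vocabulary size.
import hashlib

import torch
import torch.nn as nn


class HashedEmbedding(nn.Module):
    """Vocabulary-independent embedding lookup via token hashing."""

    def __init__(self, num_buckets: int = 4096, dim: int = 768):
        super().__init__()
        self.num_buckets = num_buckets
        # Small fixed-size matrix: num_buckets is far smaller than a typical vocabulary.
        self.table = nn.Embedding(num_buckets, dim)

    def bucket(self, token: str) -> int:
        # Cheap, deterministic hash of the token string into a bucket id.
        digest = hashlib.md5(token.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "little") % self.num_buckets

    def forward(self, tokens: list[str]) -> torch.Tensor:
        ids = torch.tensor([self.bucket(t) for t in tokens], dtype=torch.long)
        return self.table(ids)  # shape: (len(tokens), dim)


if __name__ == "__main__":
    emb = HashedEmbedding(num_buckets=4096, dim=768)
    vecs = emb(["hash", "formers", "some-unseen-token"])
    print(vecs.shape)  # torch.Size([3, 768])
```

Because the hash is deterministic, any token, including ones never seen during pre-training, maps to some bucket; several tokens may share an embedding, which is the memory/quality trade-off the paper studies.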
Anthology ID:
2022.emnlp-main.536
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7862–7874
URL:
https://aclanthology.org/2022.emnlp-main.536
DOI:
10.18653/v1/2022.emnlp-main.536
Cite (ACL):
Huiyin Xue and Nikolaos Aletras. 2022. HashFormers: Towards Vocabulary-independent Pre-trained Transformers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7862–7874, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
HashFormers: Towards Vocabulary-independent Pre-trained Transformers (Xue & Aletras, EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.536.pdf