Efficient Domain Adaptation of Language Models via Adaptive Tokenization

Vin Sachidananda, Jason Kessler, Yi-An Lai


Abstract
Contextual embedding-based language models trained on large datasets, such as BERT and RoBERTa, provide strong performance across a wide range of tasks and are ubiquitous in modern NLP. It has been observed that fine-tuning these models on tasks involving data from domains different from that on which they were pretrained can lead to suboptimal performance. Recent work has explored approaches to adapt pretrained language models to new domains by incorporating additional pretraining on domain-specific corpora and task data. We propose an alternative approach for transferring pretrained language models to new domains by adapting their tokenizers. We show that domain-specific subword sequences can be determined efficiently, directly from divergences in the conditional token distributions of the base and domain-specific corpora. On datasets from four disparate domains, we find that adaptive tokenization on a pretrained RoBERTa model provides greater than 85% of the performance benefits of domain-specific pretraining. Our approach produces smaller models and requires less training and inference time than other approaches that use tokenizer augmentation. Although adaptive tokenization incurs a 6% increase in model parameters (due to the introduction of 10k new domain-specific tokens), our approach, using 64 CPUs, is >72x faster than further pretraining the language model on domain-specific corpora on 8 TPUs.
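To make the tokenizer-adaptation idea concrete, the sketch below scores candidate multi-subword sequences by the log-ratio of their empirical probabilities in a domain corpus versus the base corpus, then keeps the top-scoring sequences as new tokenizer entries. This is a minimal illustration with assumed function names and toy corpora, not the authors' exact procedure, which operates on conditional token distributions and adds roughly 10k sequences.

    # Illustrative sketch only: rank candidate subword sequences by how much
    # more probable they are in a domain corpus than in a base corpus.
    from collections import Counter
    from math import log


    def ngram_counts(tokens, max_n=3):
        """Count contiguous subword sequences (n-grams) up to length max_n."""
        counts = Counter()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
        return counts


    def score_candidates(domain_tokens, base_tokens, max_n=3, smoothing=1.0):
        """Log-ratio of empirical sequence probabilities in the domain corpus
        vs. the base corpus; a simple divergence proxy for illustration."""
        d_counts = ngram_counts(domain_tokens, max_n)
        b_counts = ngram_counts(base_tokens, max_n)
        d_total = sum(d_counts.values())
        b_total = sum(b_counts.values())
        scores = {}
        for seq, d_count in d_counts.items():
            if len(seq) < 2:  # single subwords are already in the vocabulary
                continue
            p_domain = (d_count + smoothing) / (d_total + smoothing)
            p_base = (b_counts.get(seq, 0) + smoothing) / (b_total + smoothing)
            scores[seq] = log(p_domain / p_base)
        return scores


    if __name__ == "__main__":
        # Toy corpora, pre-segmented with the base tokenizer (hypothetical data).
        domain = "the ion channel gating kinetics of the ion channel".split()
        base = "the movie was great and the acting was great".split()

        scores = score_candidates(domain, base)
        top_k = 5  # the paper adds on the order of 10k sequences
        for seq in sorted(scores, key=scores.get, reverse=True)[:top_k]:
            print(" ".join(seq), round(scores[seq], 3))

In this toy run, domain-frequent sequences such as "ion channel" score highest and would be added to the tokenizer, while sequences common to both corpora are suppressed by the ratio.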
Anthology ID:
2021.sustainlp-1.16
Volume:
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2021
Address:
Virtual
Editors:
Nafise Sadat Moosavi, Iryna Gurevych, Angela Fan, Thomas Wolf, Yufang Hou, Ana Marasović, Sujith Ravi
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
155–165
URL:
https://aclanthology.org/2021.sustainlp-1.16
DOI:
10.18653/v1/2021.sustainlp-1.16
Cite (ACL):
Vin Sachidananda, Jason Kessler, and Yi-An Lai. 2021. Efficient Domain Adaptation of Language Models via Adaptive Tokenization. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, pages 155–165, Virtual. Association for Computational Linguistics.
Cite (Informal):
Efficient Domain Adaptation of Language Models via Adaptive Tokenization (Sachidananda et al., sustainlp 2021)
PDF:
https://aclanthology.org/2021.sustainlp-1.16.pdf
Video:
https://aclanthology.org/2021.sustainlp-1.16.mp4
Data
IMDb Movie Reviews, S2ORC, SciERC