Evaluating Pretraining Strategies for Clinical BERT Models

Anastasios Lamproudis, Aron Henriksson, Hercules Dalianis


Abstract
Research suggests that using generic language models in specialized domains may be sub-optimal due to significant domain differences. As a result, various strategies for developing domain-specific language models have been proposed, including techniques for adapting an existing generic language model to the target domain, e.g. through various forms of vocabulary modification and continued domain-adaptive pretraining with in-domain data. Here, an empirical investigation is carried out in which various strategies for adapting a generic language model to the clinical domain are compared to pretraining a pure clinical language model. Three clinical language models for Swedish, pretrained for up to ten epochs, are fine-tuned and evaluated on several downstream tasks in the clinical domain, and their downstream performance is compared over the course of the pretraining epochs. The results show that the domain-specific language models outperform a general-domain language model, although there is little difference in performance among the various clinical language models. However, compared to pretraining a pure clinical language model with only in-domain data, leveraging and adapting an existing general-domain language model requires fewer epochs of pretraining with in-domain data.
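For illustration, the sketch below shows what continued domain-adaptive pretraining of an existing general-domain BERT checkpoint on in-domain clinical text might look like with the Hugging Face transformers library. The checkpoint name, corpus path, and hyperparameters are illustrative assumptions for the sketch and are not taken from the paper.

```python
# A minimal sketch of continued domain-adaptive pretraining (masked language
# modelling) of an existing general-domain BERT model on in-domain text.
# Checkpoint, file path, and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Start from a generic Swedish BERT checkpoint (assumed; any generic BERT works).
checkpoint = "KB/bert-base-swedish-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# In-domain (clinical) text, one document per line; path is hypothetical.
dataset = load_dataset("text", data_files={"train": "clinical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking for the MLM pretraining objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="clinical-bert",       # hypothetical output directory
    num_train_epochs=10,              # the paper pretrains for up to ten epochs
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

The resulting checkpoint can then be fine-tuned on downstream clinical tasks in the usual way; the pure clinical alternative discussed in the paper would instead pretrain from a randomly initialized model using only in-domain data.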
Anthology ID:
2022.lrec-1.43
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
410–416
URL:
https://aclanthology.org/2022.lrec-1.43
Cite (ACL):
Anastasios Lamproudis, Aron Henriksson, and Hercules Dalianis. 2022. Evaluating Pretraining Strategies for Clinical BERT Models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 410–416, Marseille, France. European Language Resources Association.
Cite (Informal):
Evaluating Pretraining Strategies for Clinical BERT Models (Lamproudis et al., LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.43.pdf