Nazgol Tavabi
2023
Intermediate Domain Finetuning for Weakly Supervised Domain-adaptive Clinical NER
Shilpa Suresh | Nazgol Tavabi | Shahriar Golchin | Leah Gilreath | Rafael Garcia-Andujar | Alexander Kim | Joseph Murray | Blake Bacevich | Ata Kiapour
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Accurate human-annotated data for real-world use cases can be scarce and expensive to obtain. In the clinical domain, obtaining such data is even more difficult due to privacy concerns, which not only restrict open access to quality data but also require that the annotation be done by domain experts. In this paper, we propose a novel framework - InterDAPT - that leverages Intermediate Domain Finetuning to allow language models to adapt to narrow domains with small, noisy datasets. By making use of peripherally-related, unlabeled datasets, this framework circumvents domain-specific data scarcity issues. Our results show that this weakly supervised framework provides performance improvements in downstream clinical named entity recognition tasks.
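For illustration, a minimal sketch of the general recipe the abstract describes, not the paper's InterDAPT implementation: continue masked-language-model pre-training on a peripherally related, unlabeled corpus, then fine-tune the adapted encoder on the small, weakly labeled clinical NER set. The base model name, label count, and the dataset variables named in comments are placeholders.

```python
# Illustrative sketch only -- not the authors' InterDAPT code.
# Assumes Hugging Face `transformers`; the base model, label count, and the
# `unlabeled_related_corpus` / `weakly_labeled_ner_set` datasets are placeholders.
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
)

base = "bert-base-uncased"  # assumed base PLM
tokenizer = AutoTokenizer.from_pretrained(base)

# Step 1: intermediate-domain adaptation -- continue MLM training on
# peripherally related, unlabeled text.
mlm_model = AutoModelForMaskedLM.from_pretrained(base)
mlm_collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
# Trainer(model=mlm_model, data_collator=mlm_collator,
#         train_dataset=unlabeled_related_corpus, ...).train()
# mlm_model.save_pretrained("adapted-encoder")

# Step 2: fine-tune the adapted encoder for NER on the small, noisy labels.
ner_model = AutoModelForTokenClassification.from_pretrained(base, num_labels=5)
# In practice, load the step-1 checkpoint ("adapted-encoder") instead of `base`,
# then train with token-level labels:
# Trainer(model=ner_model, train_dataset=weakly_labeled_ner_set, ...).train()
```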
Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords
Shahriar Golchin | Mihai Surdeanu | Nazgol Tavabi | Ata Kiapour
Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)
We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks in-domain keywords, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that the fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that used in-domain pre-training with random masking as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).
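As a rough illustration of the idea, and not the paper's released implementation: extract in-domain keywords with KeyBERT, then build MLM examples in which only the word-pieces belonging to those keywords are masked, rather than masking random tokens. The example sentence, model choice, and the simple word-level matching below are assumptions.

```python
# Illustrative sketch only -- not the paper's released code.
# Assumes `keybert` and Hugging Face `transformers` are installed; the example
# text, tokenizer choice, and naive keyword matching are assumptions.
import torch
from keybert import KeyBERT
from transformers import AutoTokenizer

# 1) Identify in-domain keywords with KeyBERT (Grootendorst, 2020).
docs = ["MRI showed a complete tear of the anterior cruciate ligament."]
kw_model = KeyBERT()
keywords = {kw for doc in docs for kw, _ in kw_model.extract_keywords(doc, top_n=5)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# 2) Mask only the word-pieces of in-domain keywords (instead of random masking).
def mask_keywords(text, keywords, tokenizer):
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)      # -100 = ignored by the MLM loss
    for pos, wid in enumerate(enc.word_ids(0)):
        if wid is None:                            # special tokens ([CLS], [SEP])
            continue
        span = enc.word_to_chars(wid)              # character span of this word
        if text[span.start:span.end].lower() in keywords:
            labels[0, pos] = input_ids[0, pos]     # predict the original piece
            input_ids[0, pos] = tokenizer.mask_token_id
    return input_ids, labels

masked_ids, labels = mask_keywords(docs[0], keywords, tokenizer)
print(tokenizer.decode(masked_ids[0]))
```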