Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER

Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, Iain Buchan


Abstract
Adapting language models (LMs) to novel domains is often achieved through fine-tuning a pre-trained LM (PLM) on domain-specific data. Fine-tuning introduces new knowledge into an LM, enabling it to comprehend and efficiently perform a target domain task. Fine-tuning can, however, be inadvertently insensitive if it ignores the wide array of disparities (e.g., in word meaning) between source and target domains. For instance, words such as chronic and pressure may be treated lightly in social conversations; clinically, however, these words usually express concern. To address insensitive fine-tuning, we propose Mask Specific Language Modeling (MSLM), an approach that efficiently acquires target domain knowledge by appropriately weighting the importance of domain-specific terms (DS-terms) during fine-tuning. MSLM jointly masks DS-terms and generic words, then learns mask-specific losses by ensuring LMs incur larger penalties for inaccurately predicting DS-terms than for generic words. Results of our analysis show that MSLM improves LMs' sensitivity to, and detection of, DS-terms. We empirically show that the optimal masking rate depends not only on the LM, but also on the dataset and the sequence length. Our proposed masking strategy outperforms advanced masking strategies such as span- and PMI-based masking.
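A minimal sketch of the mask-specific loss idea described above, assuming a PyTorch setup; the weighting scheme, the tensor names (logits, labels, is_ds_token), and the default weight values are illustrative assumptions rather than the authors' released implementation.

```python
# Sketch of a mask-specific MLM loss: masked domain-specific (DS) terms
# incur a larger penalty than masked generic words when mispredicted.
# ds_weight / generic_weight values are hypothetical defaults.
import torch
import torch.nn.functional as F

def mask_specific_loss(logits, labels, is_ds_token,
                       ds_weight=2.0, generic_weight=1.0):
    """Weighted cross-entropy over masked positions.

    logits:      (batch, seq_len, vocab_size) MLM head outputs
    labels:      (batch, seq_len) original token ids, -100 at unmasked positions
    is_ds_token: (batch, seq_len) bool, True where the masked token is a DS-term
    """
    vocab_size = logits.size(-1)
    # Per-token loss; positions labelled -100 contribute zero.
    per_token = F.cross_entropy(
        logits.reshape(-1, vocab_size), labels.reshape(-1),
        ignore_index=-100, reduction="none",
    ).reshape(labels.shape)

    # Larger penalty for DS-terms than for generic masked words.
    weights = torch.where(
        is_ds_token,
        torch.full_like(per_token, ds_weight),
        torch.full_like(per_token, generic_weight),
    )
    masked = labels != -100
    return (per_token * weights)[masked].sum() / weights[masked].sum()
```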
Anthology ID:
2024.naacl-long.280
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5013–5029
URL:
https://aclanthology.org/2024.naacl-long.280
Cite (ACL):
Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, and Iain Buchan. 2024. Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5013–5029, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER (Abaho et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.280.pdf
Copyright:
2024.naacl-long.280.copyright.pdf