Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?

Aman Sinha, Timothee Mickus, Marianne Clausel, Mathieu Constant, Xavier Coubez


Abstract
The success of pretrained language models (PLMs) across a wide range of use cases has led to significant investment from the NLP community towards building domain-specific foundation models. On the other hand, in mission-critical settings such as biomedical applications, other aspects also factor in, chief among them a model's ability to produce reasonable estimates of its own uncertainty. In the present study, we discuss these two desiderata through the lens of how they shape the entropy of a model's output probability distribution. We find that domain specificity and uncertainty awareness can often be successfully combined, but that the exact task at hand weighs in much more strongly.
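
To make the "entropy lens" concrete, the short sketch below (not taken from the paper) computes the Shannon entropy of a classifier's output probability distribution; the helper name predictive_entropy and the toy probability vectors are illustrative assumptions, not the authors' code.

    import numpy as np

    def predictive_entropy(probs: np.ndarray) -> float:
        # Shannon entropy H(p) = -sum_c p_c * log(p_c) of an output
        # distribution; assumed helper, not from the paper.
        probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
        return float(-np.sum(probs * np.log(probs)))

    # A confident prediction yields low entropy; a flat 3-class
    # prediction approaches the maximum, log(3) ~ 1.10 nats.
    print(predictive_entropy(np.array([0.98, 0.01, 0.01])))  # low
    print(predictive_entropy(np.array([1/3, 1/3, 1/3])))     # ~1.10

Under this view, a model with reasonable uncertainty estimates should produce high-entropy output distributions on inputs it is likely to misclassify and low-entropy ones where it is reliable.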
Anthology ID:
2024.bionlp-1.16
Volume:
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Makoto Miwa, Kirk Roberts, Junichi Tsujii
Venues:
BioNLP | WS
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
202–211
URL:
https://aclanthology.org/2024.bionlp-1.16
Cite (ACL):
Aman Sinha, Timothee Mickus, Marianne Clausel, Mathieu Constant, and Xavier Coubez. 2024. Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 202–211, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification? (Sinha et al., BioNLP-WS 2024)
PDF:
https://aclanthology.org/2024.bionlp-1.16.pdf