Biomedical Language Models are Robust to Sub-optimal Tokenization

Bernal Jimenez Gutierrez, Huan Sun, Yu Su


Abstract
Unlike general English, much biomedical terminology has been coined in recent history by biomedical professionals with the goal of being precise and concise, often by concatenating meaningful biomedical morphemes to create new semantic units. Nevertheless, most modern biomedical language models (LMs) are pre-trained using standard domain-specific tokenizers derived from large-scale biomedical corpus statistics, without explicitly leveraging the agglutinating nature of biomedical language. In this work, we first find that standard open-domain and biomedical tokenizers are largely unable to segment biomedical terms into meaningful components. We therefore hypothesize that a tokenizer which segments biomedical terminology more accurately would enable biomedical LMs to improve their performance on downstream biomedical NLP tasks, especially those that involve biomedical terms directly, such as named entity recognition (NER) and entity linking. Surprisingly, we find that pre-training a biomedical LM with a more accurate biomedical tokenizer does not improve its entity representation quality, as measured by several intrinsic and extrinsic metrics, including masked language modeling (MLM) prediction accuracy as well as NER and entity linking performance. These quantitative findings, along with a case study that examines entity representation quality more directly, suggest that the biomedical pre-training process is quite robust to instances of sub-optimal tokenization.
Anthology ID:
2023.bionlp-1.32
Volume:
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
350–362
URL:
https://aclanthology.org/2023.bionlp-1.32
DOI:
10.18653/v1/2023.bionlp-1.32
Cite (ACL):
Bernal Jimenez Gutierrez, Huan Sun, and Yu Su. 2023. Biomedical Language Models are Robust to Sub-optimal Tokenization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 350–362, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Biomedical Language Models are Robust to Sub-optimal Tokenization (Jimenez Gutierrez et al., BioNLP 2023)
PDF:
https://aclanthology.org/2023.bionlp-1.32.pdf
Video:
https://aclanthology.org/2023.bionlp-1.32.mp4