How Long Is Enough? Exploring the Optimal Intervals of Long-Range Clinical Note Language Modeling

Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Huan Zhong, MingQian Zhong, Yuk-Yu Nancy Ip, Pascale Fung


Abstract
Large pre-trained language models (LMs) have been widely adopted in the biomedical and clinical domains, yielding powerful models such as bio-lm and BioELECTRA. However, the applicability of these models to real clinical use cases is hindered by the limited ability of pre-trained LMs to process long textual data spanning thousands of words, a common length for a clinical note. In this work, we explore long-range adaptation of such LMs with Longformer, allowing the LMs to capture longer clinical note context. We conduct experiments on three n2c2 challenge datasets and a longitudinal clinical dataset from the Hong Kong Hospital Authority electronic health record (EHR) system to show the effectiveness and generalizability of this approach, achieving a ~10% F1-score improvement. Based on our experiments, we conclude that capturing a longer clinical note interval benefits model performance, but the cut-off interval that achieves optimal performance differs across target variables.
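As a concrete illustration of the long-range adaptation the abstract describes, the sketch below encodes a long clinical note with a Longformer-style model whose 4,096-token window replaces the usual 512-token limit. This is a minimal sketch, not the authors' released code: the public `allenai/longformer-base-4096` checkpoint, the binary classification head, and the choice of global attention only on the [CLS] token are assumptions standing in for the paper's clinically adapted LMs.

```python
# Illustrative sketch (not the authors' code): encode a long clinical note with a
# Longformer-style model so that more of the note fits in the context window.
# Assumption: the generic "allenai/longformer-base-4096" checkpoint and a
# sequence-classification head stand in for the paper's clinically adapted LMs.
import torch
from transformers import LongformerTokenizerFast, LongformerForSequenceClassification

MODEL_NAME = "allenai/longformer-base-4096"

tokenizer = LongformerTokenizerFast.from_pretrained(MODEL_NAME)
model = LongformerForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

clinical_note = "..."  # a multi-thousand-word clinical note would go here

# Tokenize with a 4,096-token window instead of the usual 512-token limit.
inputs = tokenizer(
    clinical_note,
    truncation=True,
    max_length=4096,
    return_tensors="pt",
)

# Longformer combines sparse local attention with global attention on selected tokens;
# here only the first ([CLS]) token attends globally, a common default for classification.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)

predicted_label = outputs.logits.argmax(dim=-1).item()
print(predicted_label)
```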
Anthology ID:
2022.louhi-1.19
Volume:
Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Alberto Lavelli, Eben Holderness, Antonio Jimeno Yepes, Anne-Lyse Minard, James Pustejovsky, Fabio Rinaldi
Venue:
Louhi
Publisher:
Association for Computational Linguistics
Pages:
160–172
URL:
https://aclanthology.org/2022.louhi-1.19
DOI:
10.18653/v1/2022.louhi-1.19
Cite (ACL):
Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Huan Zhong, MingQian Zhong, Yuk-Yu Nancy Ip, and Pascale Fung. 2022. How Long Is Enough? Exploring the Optimal Intervals of Long-Range Clinical Note Language Modeling. In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI), pages 160–172, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
How Long Is Enough? Exploring the Optimal Intervals of Long-Range Clinical Note Language Modeling (Cahyawijaya et al., Louhi 2022)
PDF:
https://aclanthology.org/2022.louhi-1.19.pdf
Video:
https://aclanthology.org/2022.louhi-1.19.mp4