2024
When Raw Data Prevails: Are Large Language Model Embeddings Effective in Numerical Data Representation for Medical Machine Learning Applications?
Yanjun Gao | Skatje Myers | Shan Chen | Dmitriy Dligach | Timothy A Miller | Danielle Bitterman | Matthew Churpek | Majid Afshar
Findings of the Association for Computational Linguistics: EMNLP 2024
The introduction of Large Language Models (LLMs) has advanced data representation and analysis, bringing significant progress to their use in medical question answering. Despite these advancements, the integration of tabular data, especially the numerical data pivotal in clinical contexts, into LLM paradigms has not been thoroughly explored. In this study, we examine the effectiveness of vector representations from the last hidden states of LLMs for medical diagnostics and prognostics using electronic health record (EHR) data. We compare the performance of these embeddings with that of raw numerical EHR data when used as feature inputs to traditional machine learning (ML) algorithms that excel at tabular data learning, such as eXtreme Gradient Boosting. We focus on instruction-tuned LLMs in a zero-shot setting to represent abnormal physiological data and evaluate their utility as feature extractors that enhance ML classifiers for predicting diagnoses, length of stay, and mortality. Furthermore, we examine prompt engineering techniques on zero-shot and few-shot LLM embeddings to measure their impact comprehensively. Although our findings suggest that raw data features still prevail in medical ML tasks, zero-shot LLM embeddings demonstrate competitive results, suggesting a promising avenue for future research in medical applications.
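A minimal sketch of the embedding-as-features pipeline the abstract describes, assuming a HuggingFace-style instruction-tuned model; the model name, the textual serialization of vitals/labs, the pooling choice, and the toy labels below are illustrative placeholders, not the paper's exact setup:

import numpy as np
import torch
import xgboost as xgb
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder instruction-tuned LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text):
    # Tokenize, run the LLM, and mean-pool the last hidden state into one vector.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Serialize numeric EHR values as text, embed them, and train a tabular classifier.
texts = [
    "heart rate 132 bpm, creatinine 2.1 mg/dL",
    "heart rate 78 bpm, creatinine 0.9 mg/dL",
]
labels = [1, 0]  # e.g., in-hospital mortality (toy labels)
X = np.stack([embed(t) for t in texts])
clf = xgb.XGBClassifier(n_estimators=50).fit(X, labels)

The baseline the paper compares against would instead pass the raw numeric values (heart rate, creatinine, etc.) directly to the same XGBoost classifier.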
Language Models are Surprisingly Fragile to Drug Names in Biomedical Benchmarks
Jack Gallifant | Shan Chen | Pedro José Ferreira Moreira | Nikolaj Munch | Mingye Gao | Jackson Pond | Leo Anthony Celi | Hugo Aerts | Thomas Hartvigsen | Danielle Bitterman
Findings of the Association for Computational Linguistics: EMNLP 2024
Medical knowledge is context-dependent and requires consistent reasoning across various natural language expressions of semantically equivalent phrases. This is particularly crucial for drug names, where patients often use brand names like Advil or Tylenol instead of their generic equivalents. To study this, we create a new robustness dataset, RABBITS, to evaluate performance differences on medical benchmarks after swapping brand and generic drug names using physician expert annotations. We assess both open-source and API-based LLMs on MedQA and MedMCQA, revealing a consistent performance drop ranging from 1% to 10%. Furthermore, we identify the contamination of test data in widely used pre-training datasets as a potential source of this fragility.
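A minimal sketch of the brand-to-generic swap underlying a RABBITS-style robustness check; the drug-name map and question below are illustrative only (the actual dataset relies on physician expert annotations rather than a simple lookup table):

import re

BRAND_TO_GENERIC = {"Advil": "ibuprofen", "Tylenol": "acetaminophen"}  # illustrative

def swap_names(text, mapping):
    # Replace whole-word drug names, case-insensitively, leaving other text intact.
    for brand, generic in mapping.items():
        text = re.sub(rf"\b{re.escape(brand)}\b", generic, text, flags=re.IGNORECASE)
    return text

q = "A patient taking Advil daily reports epigastric pain. What is the likely cause?"
print(swap_names(q, BRAND_TO_GENERIC))
# Benchmark accuracy is then compared on the original vs. swapped questions;
# a gap indicates fragility to surface-form changes in drug names.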
Proceedings of the 6th Clinical Natural Language Processing Workshop
Tristan Naumann | Asma Ben Abacha | Steven Bethard | Kirk Roberts | Danielle Bitterman
Proceedings of the 6th Clinical Natural Language Processing Workshop
2023
Measuring Pointwise 𝒱-Usable Information In-Context-ly
Sheng Lu | Shan Chen | Yingya Li | Danielle Bitterman | Guergana Savova | Iryna Gurevych
Findings of the Association for Computational Linguistics: EMNLP 2023
In-context learning (ICL) is a new learning paradigm that has gained popularity along with the development of large language models. In this work, we adapt a recently proposed hardness metric, pointwise 𝒱-usable information (PVI), to an in-context version (in-context PVI). Compared to the original PVI, in-context PVI is more efficient in that it requires only a few exemplars and does not require fine-tuning. We conduct a comprehensive empirical analysis to evaluate the reliability of in-context PVI. Our findings indicate that in-context PVI estimates exhibit similar characteristics to the original PVI. Specific to the in-context setting, we show that in-context PVI estimates remain consistent across different exemplar selections and numbers of shots; the variance across exemplar selections is insignificant, which suggests that the estimates are stable. Furthermore, we demonstrate how in-context PVI can be employed to identify challenging instances. Our work highlights the potential of in-context PVI and provides new insights into the capabilities of ICL.
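For reference, the original PVI of Ethayarajh et al. (2022), on which the in-context variant builds, can be written as the following sketch in LaTeX; in the in-context version, both probabilities come from a frozen LLM conditioned on a few exemplars (with and without the input) rather than from fine-tuned models:

\[
\mathrm{PVI}(x \to y) = -\log_2 g'[\varnothing](y) + \log_2 g[x](y)
\]

Here \(g'[\varnothing](y)\) is the probability the model assigns to the gold label \(y\) given a null input and \(g[x](y)\) the probability given the actual input \(x\); a higher PVI indicates an easier instance.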
2020
Extracting Relations between Radiotherapy Treatment Details
Danielle Bitterman | Timothy Miller | David Harris | Chen Lin | Sean Finan | Jeremy Warner | Raymond Mak | Guergana Savova
Proceedings of the 3rd Clinical Natural Language Processing Workshop
We present work on the extraction of radiotherapy treatment information from the clinical narrative in electronic medical records. Radiotherapy is a central component of the treatment of most solid cancers. Its details are described in a non-standardized fashion, using jargon not found in other medical specialties, which complicates the already difficult task of manual data extraction. We examine the performance of several state-of-the-art neural methods for relation extraction of radiotherapy treatment details, with the goal of automating detailed information extraction. The neural systems perform at 0.82-0.88 macro-average F1, which approximates or in some cases exceeds the inter-annotator agreement. To the best of our knowledge, this is the first effort to develop models for radiotherapy relation extraction and one of the few efforts at relation extraction for cancer treatment in general.
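A minimal sketch of the kind of transformer-based relation classifier such systems use, assuming a BERT-style encoder over sentences with inserted entity markers; the label set, marker scheme, and example sentence are illustrative assumptions, not the paper's actual annotation schema:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["NO_RELATION", "DOSE-OF", "FRACTIONS-OF", "SITE-OF"]  # illustrative schema
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

# Mark a candidate entity pair in the sentence, then classify the pair's relation.
# (Real systems register the markers as special tokens and fine-tune on gold pairs.)
text = "Patient received <e1> 45 Gy </e1> in <e2> 25 fractions </e2> to the pelvis."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[int(logits.argmax())])  # classifier head is untrained here, so output is arbitrary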