Matthew Lungren
2023
Exploring the Boundaries of GPT-4 in Radiology
Qianchu Liu | Stephanie Hyland | Shruthi Bannur | Kenza Bouzid | Daniel Castro | Maria Wetscherek | Robert Tinn | Harshita Sharma | Fernando Pérez-García | Anton Schwaighofer | Pranav Rajpurkar | Sameer Khanna | Hoifung Poon | Naoto Usuyama | Anja Thieme | Aditya Nori | Matthew Lungren | Ozan Oktay | Javier Alvarez-Valle
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
The recent success of general-domain large language models (LLMs) has significantly changed the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we focus on assessing the performance of GPT-4, the most capable LLM so far, on text-based applications for radiology reports, comparing it against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluated GPT-4 on a diverse range of common radiology tasks and found that GPT-4 either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains (≈ 10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference (F1). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are found to be overall comparable with existing manually written impressions.
2020
Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Akshay Smit | Saahil Jain | Pranav Rajpurkar | Anuj Pareek | Andrew Ng | Matthew Lungren
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The extraction of labels from radiology text reports enables large-scale training of medical imaging models. Existing approaches to report labeling typically rely either on sophisticated feature engineering based on medical domain knowledge or on manual annotations by experts. In this work, we introduce a BERT-based approach to medical image report labeling that exploits both the scale of available rule-based systems and the quality of expert annotations. We demonstrate superior performance of a biomedically pretrained BERT model first trained on annotations of a rule-based labeler and then fine-tuned on a small set of expert annotations augmented with automated backtranslation. We find that our final model, CheXbert, is able to outperform the previous best rule-based labeler with statistical significance, setting a new SOTA for report labeling on one of the largest datasets of chest X-rays.