Exploring the Boundaries of GPT-4 in Radiology
Qianchu Liu | Stephanie Hyland | Shruthi Bannur | Kenza Bouzid | Daniel Castro | Maria Wetscherek | Robert Tinn | Harshita Sharma | Fernando Pérez-García | Anton Schwaighofer | Pranav Rajpurkar | Sameer Khanna | Hoifung Poon | Naoto Usuyama | Anja Thieme | Aditya Nori | Matthew Lungren | Ozan Oktay | Javier Alvarez-Valle
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
The recent success of general-domain large language models (LLMs) has significantly shifted the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we focus on assessing the performance of GPT-4, the most capable LLM to date, on text-based applications for radiology reports, comparing it against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluated GPT-4 on a diverse range of common radiology tasks and found that GPT-4 either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains (≈ 10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference (F1). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are found to be overall comparable with existing manually written impressions.
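To make the distinction between the two prompting modes mentioned above concrete, the following is a minimal sketch of how zero-shot and example-based (few-shot) prompts for findings summarisation might be assembled in chat format. It is not the paper's actual prompt design: the instruction text, the `build_messages` helper, and the example findings/impression pairs are all invented here for illustration.

```python
# Sketch of zero-shot vs. example-based prompting for findings summarisation.
# All prompt text and example reports below are illustrative, not from the paper.

ZERO_SHOT_INSTRUCTION = (
    "Summarise the following radiology findings into an impression section."
)

def build_messages(findings, examples=None):
    """Build a chat-style prompt; `examples` is a list of (findings, impression) pairs."""
    messages = [{"role": "system", "content": ZERO_SHOT_INSTRUCTION}]
    for ex_findings, ex_impression in (examples or []):
        # Example-based prompting: in-context pairs expose the dataset-specific
        # style and schema that the abstract notes GPT-4 benefits from learning.
        messages.append({"role": "user", "content": ex_findings})
        messages.append({"role": "assistant", "content": ex_impression})
    # The new report to summarise always comes last.
    messages.append({"role": "user", "content": findings})
    return messages

if __name__ == "__main__":
    few_shot_examples = [(
        "Heart size normal. Lungs are clear. No pleural effusion.",
        "No acute cardiopulmonary abnormality.",
    )]
    new_findings = "Stable small left pleural effusion. No new consolidation."
    print("zero-shot:", build_messages(new_findings))
    print("few-shot:", build_messages(new_findings, few_shot_examples))
```

The resulting message list could then be passed to any chat-completion API; the only methodological difference between the two modes is whether in-context example pairs are included before the target report.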