Justin Xu


2024

GREEN: Generative Radiology Report Evaluation and Error Notation
Sophie Ostmeier | Justin Xu | Zhihong Chen | Maya Varma | Louis Blankemeier | Christian Bluethgen | Arne Edward Michalson Md | Michael Moseley | Curtis Langlotz | Akshay S Chaudhari | Jean-Benoit Delbrouck
Findings of the Association for Computational Linguistics: EMNLP 2024

Evaluating radiology reports is a challenging problem, as factual correctness is extremely important given their medical nature. Existing automatic evaluation metrics either fail to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human-interpretable explanations of clinically significant errors, enabling feedback loops with end-users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate our GREEN metric by comparing it to GPT-4, as well as to error counts of 6 experts and preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts, but also higher alignment with expert preferences when compared to previous approaches.
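The abstract describes GREEN as a language-model judge that enumerates clinically significant errors and turns them into a score. A minimal sketch of that idea is below; the prompt wording, the `generate` callable, and the exact score formula are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a GREEN-style evaluation loop (not the paper's code).
# Assumes a generic text-in/text-out LLM call `generate(prompt) -> str`.
import re

ERROR_PROMPT = """You are evaluating a candidate radiology report against a reference.
Reference report:
{reference}

Candidate report:
{candidate}

List each clinically significant error in the candidate (one per line, prefixed
with "ERROR:"), then write "MATCHED: <n>" with the number of correctly
reproduced findings."""

def green_like_score(reference: str, candidate: str, generate) -> dict:
    """Score a candidate report with an LM judge and keep its explanation."""
    response = generate(ERROR_PROMPT.format(reference=reference, candidate=candidate))
    errors = [line for line in response.splitlines() if line.startswith("ERROR:")]
    matched = re.search(r"MATCHED:\s*(\d+)", response)
    n_matched = int(matched.group(1)) if matched else 0
    # Illustrative score: fraction of findings reproduced without a significant error.
    denom = n_matched + len(errors)
    score = n_matched / denom if denom else 0.0
    return {"score": score, "errors": errors, "explanation": response}
```

Returning the raw model response alongside the numeric score mirrors the paper's emphasis on interpretable, human-readable error explanations rather than a single opaque number.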

Overview of the First Shared Task on Clinical Text Generation: RRG24 and “Discharge Me!”
Justin Xu | Zhihong Chen | Andrew Johnston | Louis Blankemeier | Maya Varma | Jason Hom | William J. Collins | Ankit Modi | Robert Lloyd | Benjamin Hopkins | Curtis Langlotz | Jean-Benoit Delbrouck
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing

Recent developments in natural language generation have tremendous implications for healthcare. For instance, state-of-the-art systems could automate the generation of sections in clinical reports to alleviate physician workload and streamline hospital documentation. To explore these applications, we present a shared task consisting of two subtasks: (1) Radiology Report Generation (RRG24) and (2) Discharge Summary Generation (“Discharge Me!”). RRG24 involves generating the ‘Findings’ and ‘Impression’ sections of radiology reports given chest X-rays. “Discharge Me!” involves generating the ‘Brief Hospital Course’ and ‘Discharge Instructions’ sections of discharge summaries for patients admitted through the emergency department. “Discharge Me!” submissions were subsequently reviewed by a team of clinicians. Both tasks emphasize the goal of reducing clinician burnout and repetitive workloads by generating documentation. We received 201 submissions from across 8 teams for RRG24, and 211 submissions from across 16 teams for “Discharge Me!”.
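To make the two subtasks concrete, the sketch below shows one way a submission's output records could be organized. The field names and dataclass layout are illustrative assumptions, not the official shared-task schema.

```python
# Hypothetical submission records for the two subtasks (illustrative only).
from dataclasses import dataclass

@dataclass
class RRG24Prediction:
    study_id: str      # identifier of the chest X-ray study
    findings: str      # generated 'Findings' section
    impression: str    # generated 'Impression' section

@dataclass
class DischargeMePrediction:
    hadm_id: str                 # hospital admission identifier
    brief_hospital_course: str   # generated 'Brief Hospital Course' section
    discharge_instructions: str  # generated 'Discharge Instructions' section
```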