Kenza Bouzid


2024

MAIRA at RRG24: A specialised large multimodal model for radiology report generation
Shaury Srivastav | Mercy Ranjit | Fernando Pérez-García | Kenza Bouzid | Shruthi Bannur | Daniel C. Castro | Anton Schwaighofer | Harshita Sharma | Maximilian Ilse | Valentina Salvatelli | Sam Bond-Taylor | Fabian Falck | Anja Thieme | Hannah Richardson | Matthew P. Lungren | Stephanie L. Hyland | Javier Alvarez-Valle
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing

This paper discusses the participation of the MSR MAIRA team in the Large-Scale Radiology Report Generation Shared Task Challenge, part of the BioNLP workshop at ACL 2024. We present a radiology-specific multimodal model designed to generate radiological reports from chest X-rays (CXRs). Our proposed model combines a CXR-specific image encoder, RAD-DINO, with a Large Language Model (LLM) based on Vicuna-7B via a multi-layer perceptron (MLP) adapter. Both the adapter and the LLM were fine-tuned in a single-stage training setup to generate radiology reports. Experimental results indicate that jointly training on the findings and impression sections improves findings prediction. Additionally, incorporating lateral images alongside frontal images, when available, further enhances all metrics. More information and resources about MAIRA can be found on the project website: http://aka.ms/maira.
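As a rough illustration of the architecture described above (not the authors' released code), the sketch below shows how patch features from a CXR image encoder could be projected into an LLM's token-embedding space via an MLP adapter. The dimensions, module names, and activation choice are assumptions for illustration only; the actual MAIRA implementation may differ.

```python
import torch
import torch.nn as nn


class MLPAdapter(nn.Module):
    """Projects image-encoder patch features into the LLM token-embedding space.

    Hypothetical sketch of the adapter described in the abstract; depth, width,
    and activation are illustrative assumptions, not MAIRA's actual values.
    """

    def __init__(self, image_dim: int = 768, llm_dim: int = 4096, hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(image_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, image_dim) from the CXR encoder
        # returns:        (batch, num_patches, llm_dim) "visual tokens" to be
        #                 concatenated with the text-prompt embeddings of the LLM
        return self.proj(patch_features)


# Example: adapt ViT-style 768-dim patch embeddings to a 4096-dim LLM embedding
# space (the sizes here are illustrative, not taken from the paper).
adapter = MLPAdapter(image_dim=768, llm_dim=4096)
dummy_patches = torch.randn(2, 196, 768)   # two CXRs, 14x14 patches each
visual_tokens = adapter(dummy_patches)     # shape: (2, 196, 4096)
```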

2023

Exploring the Boundaries of GPT-4 in Radiology
Qianchu Liu | Stephanie Hyland | Shruthi Bannur | Kenza Bouzid | Daniel Castro | Maria Wetscherek | Robert Tinn | Harshita Sharma | Fernando Pérez-García | Anton Schwaighofer | Pranav Rajpurkar | Sameer Khanna | Hoifung Poon | Naoto Usuyama | Anja Thieme | Aditya Nori | Matthew Lungren | Ozan Oktay | Javier Alvarez-Valle
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

The recent success of general-domain large language models (LLMs) has significantly changed the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we focus on assessing the performance of GPT-4, the most capable LLM so far, on text-based applications for radiology reports, comparing against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluated GPT-4 on a diverse range of common radiology tasks and found that GPT-4 either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains (≈10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference (F1). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are found to be overall comparable with existing manually-written impressions.
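To make "example-based prompting" concrete, here is a minimal, hypothetical prompt-construction sketch for the findings-summarisation task; the wording, function name, and prompt format are illustrative assumptions and are not the prompts used in the paper.

```python
def build_summarisation_prompt(findings: str, examples: list[tuple[str, str]] | None = None) -> str:
    """Build a prompt asking an LLM to summarise a Findings section into an Impression.

    `examples` holds (findings, impression) pairs for example-based (few-shot)
    prompting; passing None yields a zero-shot prompt. Illustrative only.
    """
    instruction = (
        "Summarise the radiology Findings below into a concise Impression, "
        "matching the style of any examples provided.\n\n"
    )
    shots = ""
    if examples:
        for ex_findings, ex_impression in examples:
            shots += f"Findings: {ex_findings}\nImpression: {ex_impression}\n\n"
    return f"{instruction}{shots}Findings: {findings}\nImpression:"


# Zero-shot vs. example-based prompting (toy inputs, not real reports):
zero_shot = build_summarisation_prompt("No focal consolidation. Heart size normal.")
few_shot = build_summarisation_prompt(
    "No focal consolidation. Heart size normal.",
    examples=[("Clear lungs. No effusion.", "No acute cardiopulmonary process.")],
)
```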