KnowLab at RadSum23: comparing pre-trained language models in radiology report summarization

Jinge Wu, Daqian Shi, Abul Hasan, Honghan Wu


Abstract
This paper presents our contribution to the RadSum23 shared task organized as part of BioNLP 2023. We compared state-of-the-art generative language models on generating high-quality summaries from radiology reports. A two-stage fine-tuning approach was introduced to utilize knowledge learnt from different datasets. We evaluated the performance of our method using a variety of metrics, including BLEU, ROUGE, BERTScore, CheXbert, and RadGraph. Our results revealed the potential of different models in summarizing radiology reports and demonstrated the effectiveness of the two-stage fine-tuning approach. We also discussed the limitations and future directions of our work, highlighting the need for a better understanding of how architecture design affects automatic clinical summarization and how fine-tuning should be adapted accordingly.
Anthology ID:
2023.bionlp-1.54
Volume:
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
535–540
URL:
https://aclanthology.org/2023.bionlp-1.54
DOI:
10.18653/v1/2023.bionlp-1.54
Cite (ACL):
Jinge Wu, Daqian Shi, Abul Hasan, and Honghan Wu. 2023. KnowLab at RadSum23: comparing pre-trained language models in radiology report summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 535–540, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
KnowLab at RadSum23: comparing pre-trained language models in radiology report summarization (Wu et al., BioNLP 2023)
PDF:
https://aclanthology.org/2023.bionlp-1.54.pdf
Video:
https://aclanthology.org/2023.bionlp-1.54.mp4