QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization

Jean-Benoit Delbrouck, Cassie Zhang, Daniel Rubin


Abstract
This paper describes the solution submitted by the QIAI lab to the Radiology Report Summarization (RRS) challenge at MEDIQA 2021. It investigates whether using multimodality during training improves the model's summarization performance at test time. Our preliminary results show that leveraging the visual features of the x-rays associated with the radiology reports leads to higher evaluation metrics than a text-only baseline system. These improvements are reported according to the METEOR, BLEU, and ROUGE automatic evaluation metrics. Our experiments can be fully replicated at the following address: https://github.com/jbdel/vilmedic.
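
The abstract does not spell out the architecture, but the core idea of fusing x-ray features with the report text before decoding the summary can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the ResNet-50 visual encoder, the Transformer layers, and all module names and dimensions below are illustrative assumptions.

# Minimal sketch (assumed, not the authors' exact model): a summarizer whose
# decoder attends over both the encoded findings text and projected x-ray
# region features. All names and sizes are hypothetical.
import torch
import torch.nn as nn
import torchvision.models as models

class MultimodalSummarizer(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        # Text side: token embeddings + Transformer encoder over the findings.
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(enc_layer, num_layers)

        # Visual side (assumption): ResNet-50 backbone with pooling/classifier
        # removed, projected into the same d_model space as the text features.
        resnet = models.resnet50(weights=None)  # torchvision >= 0.13 API
        self.visual_encoder = nn.Sequential(*list(resnet.children())[:-2])
        self.visual_proj = nn.Linear(2048, d_model)

        # Decoder cross-attends over the concatenated text + image memory.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, findings_ids, image, summary_ids):
        # Encode the findings text: (B, T_text, d_model).
        text_mem = self.text_encoder(self.embed(findings_ids))

        # Encode the x-ray into a grid of region features: (B, 49, d_model)
        # for a 224x224 input.
        feats = self.visual_encoder(image)        # (B, 2048, 7, 7)
        feats = feats.flatten(2).transpose(1, 2)  # (B, 49, 2048)
        img_mem = self.visual_proj(feats)

        # Multimodal memory = findings tokens followed by image regions.
        memory = torch.cat([text_mem, img_mem], dim=1)

        # Teacher-forced decoding of the impression (summary) tokens.
        tgt = self.embed(summary_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)                   # (B, T_sum, vocab_size)

In this sketch the text-only baseline corresponds to dropping img_mem from the memory, so the comparison in the paper amounts to whether the decoder benefits from also attending over the x-ray region features during training.
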
Anthology ID:
2021.bionlp-1.33
Volume:
Proceedings of the 20th Workshop on Biomedical Language Processing
Month:
June
Year:
2021
Address:
Online
Editors:
Dina Demner-Fushman, Kevin Bretonnel Cohen, Sophia Ananiadou, Junichi Tsujii
Venue:
BioNLP
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
285–290
URL:
https://aclanthology.org/2021.bionlp-1.33
DOI:
10.18653/v1/2021.bionlp-1.33
Cite (ACL):
Jean-Benoit Delbrouck, Cassie Zhang, and Daniel Rubin. 2021. QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 285–290, Online. Association for Computational Linguistics.
Cite (Informal):
QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization (Delbrouck et al., BioNLP 2021)
PDF:
https://aclanthology.org/2021.bionlp-1.33.pdf
Code:
jbdel/vilmedic