SINAI at RadSum23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model

Mariia Chizhikova, Manuel Diaz-Galiano, L. Alfonso Urena-Lopez, M. Teresa Martin-Valdivia


Abstract
This paper covers the participation of the SINAI team in shared task 1B: Radiology Report Summarization at the BioNLP workshop held at ACL 2023. Our proposal follows a sequence-to-sequence approach that leverages multilingual general-domain and monolingual biomedical-domain pre-trained language models. The best-performing system, based on the domain-specific model, reached a 33.96 F1RadGraph score, the fourth-best result among the challenge participants. This model was made publicly available on HuggingFace. We also describe an attempt at Proximal Policy Optimization reinforcement learning, made in order to improve the factual correctness measured with F1RadGraph, which did not lead to satisfactory results.
Anthology ID:
2023.bionlp-1.53
Volume:
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
530–534
URL:
https://aclanthology.org/2023.bionlp-1.53
DOI:
10.18653/v1/2023.bionlp-1.53
Cite (ACL):
Mariia Chizhikova, Manuel Diaz-Galiano, L. Alfonso Urena-Lopez, and M. Teresa Martin-Valdivia. 2023. SINAI at RadSum23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 530–534, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
SINAI at RadSum23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model (Chizhikova et al., BioNLP 2023)
PDF:
https://aclanthology.org/2023.bionlp-1.53.pdf