ELiRF-VRAIN at BioLaySumm: Boosting Lay Summarization Systems Performance with Ranking Models

Vicent Ahuir, Diego Torres, Encarna Segarra, Lluís-F. Hurtado


Abstract
This paper presents our contribution to the BioLaySumm 2024 shared task of the 23rd BioNLP Workshop. The task is to create a lay summary, given a biomedical research article and its technical summary. Because the input to the system can be long, we used a Longformer Encoder-Decoder (LED). We continued pre-training a general-domain LED model on biomedical data to adapt it to this specific domain. In this pre-training phase, several pre-training tasks were combined to inject linguistic knowledge and increase the abstractiveness of the generated summaries. Since the distribution of samples between the two datasets, eLife and PLOS, is unbalanced, we fine-tuned two models: one for eLife and another for PLOS. To increase the quality of the system's lay summaries, we developed a regression model that ranks the summaries generated by the summarization models. This regression model predicts the quality of a summary along three aspects: Relevance, Readability, and Factuality. We present the results of our models and a study of the ranking capabilities of the regression model.
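The abstract describes a generate-then-rank pipeline: an LED summarizer produces candidate lay summaries, and a regression model scores them on Relevance, Readability, and Factuality to pick the best one. The sketch below is not the authors' code; it is a minimal illustration of that pattern using Hugging Face Transformers, where the checkpoint name, the number of candidates, and the `scorer` callable are placeholders standing in for the paper's fine-tuned LED models and regression ranker.

```python
# Illustrative sketch only, not the authors' implementation. The checkpoint,
# candidate count, and `scorer` are assumptions made for demonstration.
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

# A general-domain LED checkpoint; the paper continues pre-training on
# biomedical data and fine-tunes separate models for eLife and PLOS.
MODEL_NAME = "allenai/led-base-16384"  # hypothetical stand-in checkpoint

tokenizer = LEDTokenizer.from_pretrained(MODEL_NAME)
model = LEDForConditionalGeneration.from_pretrained(MODEL_NAME)


def generate_candidates(article: str, num_candidates: int = 4) -> list[str]:
    """Generate several candidate lay summaries for one article."""
    inputs = tokenizer(article, return_tensors="pt",
                       truncation=True, max_length=16384)
    # Global attention on the first token, as is customary for LED.
    global_attention_mask = torch.zeros_like(inputs["input_ids"])
    global_attention_mask[:, 0] = 1
    outputs = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        global_attention_mask=global_attention_mask,
        num_beams=num_candidates,
        num_return_sequences=num_candidates,
        max_length=512,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


def rank_candidates(candidates: list[str], scorer) -> str:
    """Return the candidate with the highest predicted quality.

    `scorer` stands in for the regression model described in the paper:
    any callable mapping a summary to (relevance, readability, factuality)
    scores, which this sketch simply averages.
    """
    scored = [(sum(scorer(c)) / 3.0, c) for c in candidates]
    return max(scored, key=lambda x: x[0])[1]
```

A usage example would generate candidates for an article and then call `rank_candidates(candidates, scorer)` with the trained quality predictor; how the regression model is trained and which features it uses are described in the paper itself.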
Anthology ID:
2024.bionlp-1.68
Volume:
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Makoto Miwa, Kirk Roberts, Junichi Tsujii
Venues:
BioNLP | WS
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
755–761
URL:
https://aclanthology.org/2024.bionlp-1.68
DOI:
10.18653/v1/2024.bionlp-1.68
Cite (ACL):
Vicent Ahuir, Diego Torres, Encarna Segarra, and Lluís-F. Hurtado. 2024. ELiRF-VRAIN at BioLaySumm: Boosting Lay Summarization Systems Performance with Ranking Models. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 755–761, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
ELiRF-VRAIN at BioLaySumm: Boosting Lay Summarization Systems Performance with Ranking Models (Ahuir et al., BioNLP-WS 2024)
PDF:
https://aclanthology.org/2024.bionlp-1.68.pdf