NCUEE-NLP at BioLaySumm Task 2: Readability-Controlled Summarization of Biomedical Articles Using the PRIMERA Models

Chao-Yi Chen, Jen-Hao Yang, Lung-Hao Lee


Abstract
This study describes the model design of the NCUEE-NLP system for BioLaySumm Task 2 at the BioNLP 2023 workshop. We separately fine-tune pretrained PRIMERA models to independently generate technical abstracts and lay summaries of biomedical articles. A total of seven evaluation metrics across three criteria were used to compare system performance. Our best submission was ranked first for relevance, second for readability, and fourth for factuality, tying for first in overall performance.
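The abstract describes fine-tuning two separate copies of the pretrained PRIMERA model, one targeting technical abstracts and one targeting lay summaries. The paper's own code is not reproduced on this page; the sketch below only illustrates what such fine-tuning could look like with the Hugging Face Transformers library. The "allenai/PRIMERA" checkpoint, the placeholder data, the field names, and all hyperparameters are assumptions for illustration, not details taken from the paper.

# Minimal fine-tuning sketch (an assumption-laden illustration, not the authors' released code).
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    LEDForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("allenai/PRIMERA")
model = LEDForConditionalGeneration.from_pretrained("allenai/PRIMERA")

# Placeholder pairs; in this setup one model copy would be trained on
# (article, technical abstract) pairs and a second on (article, lay summary) pairs.
train = Dataset.from_dict({
    "article": ["Full text of a biomedical article ..."],
    "summary": ["A plain-language summary of that article ..."],
})

def preprocess(batch):
    # Tokenize long articles for the Longformer-based encoder; summaries become labels.
    enc = tokenizer(batch["article"], max_length=4096, truncation=True)
    enc["labels"] = tokenizer(
        text_target=batch["summary"], max_length=512, truncation=True
    )["input_ids"]
    return enc

train = train.map(preprocess, batched=True, remove_columns=train.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="primera-finetuned",   # illustrative output path
        num_train_epochs=1,               # illustrative hyperparameters
        per_device_train_batch_size=1,
        learning_rate=3e-5,
    ),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

In practice the same script would be run twice with different target fields, yielding the two independent summarizers described in the abstract.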
Anthology ID: 2023.bionlp-1.62
Volume: The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Dina Demner-fushman, Sophia Ananiadou, Kevin Cohen
Venue: BioNLP
Publisher: Association for Computational Linguistics
Pages: 586–591
URL: https://aclanthology.org/2023.bionlp-1.62
DOI: 10.18653/v1/2023.bionlp-1.62
Cite (ACL):
Chao-Yi Chen, Jen-Hao Yang, and Lung-Hao Lee. 2023. NCUEE-NLP at BioLaySumm Task 2: Readability-Controlled Summarization of Biomedical Articles Using the PRIMERA Models. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 586–591, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
NCUEE-NLP at BioLaySumm Task 2: Readability-Controlled Summarization of Biomedical Articles Using the PRIMERA Models (Chen et al., BioNLP 2023)
PDF: https://aclanthology.org/2023.bionlp-1.62.pdf
Video: https://aclanthology.org/2023.bionlp-1.62.mp4