LHS712EE at BioLaySumm 2023: Using BART and LED to summarize biomedical research articles

Quancheng Liu, Xiheng Ren, V.G.Vinod Vydiswaran


Abstract
As part of our participation in BioLaySumm 2023, we explored the use of large language models (LLMs) to automatically generate concise and readable summaries of biomedical research articles. We fine-tuned pre-trained LLMs on the two provided datasets, adapting them to the shared task within the constraints of training time and computational power. Our final models achieved high relevance and factuality scores on the test set, ranking among the top five models in overall performance.
Anthology ID:
2023.bionlp-1.66
Volume:
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
620–624
URL:
https://aclanthology.org/2023.bionlp-1.66
DOI:
10.18653/v1/2023.bionlp-1.66
Cite (ACL):
Quancheng Liu, Xiheng Ren, and V.G.Vinod Vydiswaran. 2023. LHS712EE at BioLaySumm 2023: Using BART and LED to summarize biomedical research articles. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 620–624, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
LHS712EE at BioLaySumm 2023: Using BART and LED to summarize biomedical research articles (Liu et al., BioNLP 2023)
PDF:
https://aclanthology.org/2023.bionlp-1.66.pdf