Team YXZ at BioLaySumm: Adapting Large Language Models for Biomedical Lay Summarization

Jieli Zhou, Cheng Ye, Pengcheng Xu, Hongyi Xin


Abstract
Biomedical literature is crucial for disseminating new scientific findings. However, the complexity of these research articles often leads to misinterpretation by the public. To address this issue, we participated in the BioLaySumm shared task at the 2024 ACL BioNLP workshop, which focuses on automatically simplifying technical biomedical articles for non-technical audiences. We conducted a systematic evaluation of state-of-the-art (SOTA) large language models (LLMs) as of 2024 and found that LLMs generally achieve better readability scores than smaller models such as BART. We then iteratively developed title infusion, K-shot prompting, LLM rewriting, and instruction fine-tuning to further boost readability while balancing factuality and relevance. Notably, our submission achieved first place in readability at the workshop, and among the three teams with the highest readability scores, ours had the best overall rank. Here, we present our experiments and findings on how to effectively adapt LLMs for automatic lay summarization. Our code is available at https://github.com/zhoujieli/biolaysumm.
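To make the prompting techniques named in the abstract concrete, the sketch below shows one plausible way to combine title infusion with K-shot prompting. It is a minimal, hypothetical illustration, not the authors' released code; the function name, prompt wording, and example data are all assumptions.

```python
# A minimal sketch of title-infused K-shot prompting for lay summarization.
# Not the authors' implementation; all names and data here are illustrative.

def build_prompt(title, article, examples):
    """Assemble a K-shot prompt: each example pairs a (title, article)
    with a human-written lay summary; the target article comes last."""
    parts = [
        "Rewrite each biomedical article as a lay summary "
        "that a non-expert reader can understand.\n"
    ]
    for ex_title, ex_article, ex_summary in examples:
        parts.append(
            f"Title: {ex_title}\nArticle: {ex_article}\n"
            f"Lay summary: {ex_summary}\n"
        )
    # Title infusion: prepend the article's title so the model
    # anchors the summary on the paper's main finding.
    parts.append(f"Title: {title}\nArticle: {article}\nLay summary:")
    return "\n".join(parts)

# Usage with K = 2 hypothetical examples drawn from a training split.
examples = [
    ("Gene X drives tumor growth", "<article text>", "Scientists found that..."),
    ("A new malaria vaccine", "<article text>", "Researchers tested a vaccine that..."),
]
prompt = build_prompt("CRISPR repairs heart tissue", "<article text>", examples)
print(prompt)  # The assembled prompt would then be sent to an LLM.
```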
Anthology ID:
2024.bionlp-1.76
Volume:
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Makoto Miwa, Kirk Roberts, Junichi Tsujii
Venues:
BioNLP | WS
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
818–825
URL:
https://aclanthology.org/2024.bionlp-1.76
Cite (ACL):
Jieli Zhou, Cheng Ye, Pengcheng Xu, and Hongyi Xin. 2024. Team YXZ at BioLaySumm: Adapting Large Language Models for Biomedical Lay Summarization. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 818–825, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Team YXZ at BioLaySumm: Adapting Large Language Models for Biomedical Lay Summarization (Zhou et al., BioNLP-WS 2024)
PDF:
https://aclanthology.org/2024.bionlp-1.76.pdf