MedREQAL: Examining Medical Knowledge Recall of Large Language Models via Question Answering

Juraj Vladika, Phillip Schneider, Florian Matthes


Abstract
In recent years, Large Language Models (LLMs) have demonstrated an impressive ability to encode knowledge during pre-training on large text corpora. They can leverage this knowledge for downstream tasks like question answering (QA), even in complex areas involving health topics. Considering their high potential for facilitating clinical work in the future, understanding the quality of encoded medical knowledge and its recall in LLMs is an important step forward. In this study, we examine the capability of LLMs to exhibit medical knowledge recall by constructing a novel dataset derived from systematic reviews – studies that synthesize evidence-based answers to specific medical questions. Through experiments on the new MedREQAL dataset, comprising question-answer pairs extracted from rigorous systematic reviews, we assess six LLMs, including GPT and Mixtral, analyzing their classification and generation performance. Our experimental insights into LLM performance on the novel biomedical QA dataset reveal the still challenging nature of this task.
Anthology ID:
2024.findings-acl.860
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14459–14469
URL:
https://aclanthology.org/2024.findings-acl.860
Cite (ACL):
Juraj Vladika, Phillip Schneider, and Florian Matthes. 2024. MedREQAL: Examining Medical Knowledge Recall of Large Language Models via Question Answering. In Findings of the Association for Computational Linguistics ACL 2024, pages 14459–14469, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
MedREQAL: Examining Medical Knowledge Recall of Large Language Models via Question Answering (Vladika et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.860.pdf