ReMAG-KR: Retrieval and Medically Assisted Generation with Knowledge Reduction for Medical Question Answering

Sidhaarth Murali, Sowmya S., Supreetha R


Abstract
Large Language Models (LLMs) have significant potential for facilitating intelligent end-user applications in healthcare. However, hallucinations remain an inherent problem with LLMs, making it crucial to ground them in extensive medical knowledge and data. In this work, we propose a Retrieval and Medically Assisted Generation with Knowledge Reduction (ReMAG-KR) pipeline, employing a carefully curated knowledge base built with cross-encoder re-ranking strategies. The pipeline is tested on medical MCQ-based QA datasets as well as general QA datasets. It was observed that when the knowledge base is reduced, the model's performance decreases by 2–8%, while the inference time improves by 47%.
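
The page itself does not include code. As a rough illustration of the cross-encoder re-ranking step the abstract describes, the sketch below prunes a pool of retrieved passages down to a small knowledge base before generation. The library, model name, and helper function are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: cross-encoder re-ranking for knowledge reduction.
# The model name and passage pool are illustrative assumptions,
# not the exact setup used in the ReMAG-KR paper.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank_and_reduce(question: str, passages: list[str], keep_top_k: int = 3) -> list[str]:
    """Score (question, passage) pairs with the cross-encoder and keep only
    the top-k passages, shrinking the context passed to the generator."""
    scores = reranker.predict([(question, p) for p in passages])
    ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
    return [p for p, _ in ranked[:keep_top_k]]

# Toy usage: keep the single most relevant passage for a medical question.
passages = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "The femur is the longest bone in the human body.",
    "Insulin is secreted by the beta cells of the pancreas.",
]
print(rerank_and_reduce("Which drug is first-line for type 2 diabetes?", passages, keep_top_k=1))
```

Keeping only the highest-scoring passages is what trades a small drop in accuracy for faster inference, since the generator sees a much shorter context.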
Anthology ID: 2024.acl-srw.13
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Xiyan Fu, Eve Fleisig
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 140–145
URL: https://aclanthology.org/2024.acl-srw.13
Cite (ACL): Sidhaarth Murali, Sowmya S., and Supreetha R. 2024. ReMAG-KR: Retrieval and Medically Assisted Generation with Knowledge Reduction for Medical Question Answering. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 140–145, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): ReMAG-KR: Retrieval and Medically Assisted Generation with Knowledge Reduction for Medical Question Answering (Murali et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-srw.13.pdf