DrAgent: Empowering Large Language Models as Medical Agents for Multi-hop Medical Reasoning
Fenglin Liu, Zheng Li, Hongjian Zhou, Qingyu Yin, Jingfeng Yang, Xin Liu, Zhengyang Wang, Xianfeng Tang, Shiyang Li, Xiang He, Ruijie Wang, Bing Yin, Xiao Gu, Lei Clifton, David A. Clifton
Findings of the Association for Computational Linguistics: EMNLP 2025
Although large language models (LLMs) have outperformed human experts on medical examinations, adopting LLMs in real-world clinical decision-making, which typically involves multi-hop medical reasoning, remains challenging. Common practices include prompting commercial LLMs and fine-tuning LLMs on medical data. However, in the clinical domain, using commercial LLMs raises privacy concerns over sensitive patient data, and fine-tuning competitive medical LLMs for different tasks usually requires extensive data and computing resources, which are difficult to acquire, especially in medical institutions with limited infrastructure. We propose DrAgent, which builds LLMs into agents that deliver accurate medical decision-making and reasoning. In implementation, we take a lightweight LLM as the backbone and have it collaborate with diverse clinical tools. To make efficient use of data, DrAgent introduces recursive curriculum learning, which optimizes the LLM in an easy-to-hard progression. The results show that our approach achieves competitive performance on diverse datasets.
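The easy-to-hard progression mentioned above can be illustrated with a minimal sketch: order training samples by a difficulty score and split them into stages that are consumed easiest-first. The difficulty function and stage count here are hypothetical placeholders; the abstract does not specify DrAgent's actual recursive difficulty criterion.

```python
# Minimal sketch of easy-to-hard curriculum ordering.
# `difficulty` is a hypothetical scoring function, not DrAgent's actual criterion.

def curriculum_stages(samples, difficulty, n_stages=3):
    """Sort samples easiest-first and split them into training stages."""
    ordered = sorted(samples, key=difficulty)
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + stage_size] for i in range(0, len(ordered), stage_size)]

# Example: treat shorter questions as "easier" (an assumption for illustration).
samples = ["q1", "longer question 2", "a much longer question 3", "q4"]
stages = curriculum_stages(samples, difficulty=len, n_stages=2)
# The model would first be trained on stages[0], then on stages[1].
```

In a recursive variant, the difficulty scores could be re-estimated from the partially trained model after each stage before re-splitting the remaining data.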