ACE-M3: Automatic Capability Evaluator for Multimodal Medical Models
Xiechi Zhang | Shunfan Zheng | Linlin Wang | Gerard de Melo | Zhu Cao | Xiaoling Wang | Liang He
Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025)
As multimodal large language models (MLLMs) gain prominence in the medical field, the need for precise evaluation methods to assess their effectiveness has become critical. While benchmarks provide a reliable means to evaluate the capabilities of MLLMs, traditional metrics like ROUGE and BLEU employed for open-domain evaluation only focus on token overlap and may not align with human judgment. Human evaluation is more reliable, but it is labor-intensive, costly, and not scalable. LLM-based evaluation methods have proven promising, but to date there remains an urgent need for open-source multimodal LLM-based evaluators in the medical field. To address this issue, we introduce ACE-M3, an open-source Automatic Capability Evaluator for Multimodal Medical Models that is specifically designed to assess the question-answering abilities of medical MLLMs. It first utilizes a branch-merge architecture to provide both a detailed analysis and a concise final score based on standard medical evaluation criteria. Subsequently, a reward token-based direct preference optimization (RTDPO) strategy is incorporated to save training time without compromising the performance of our model. Extensive experiments have demonstrated the effectiveness of our ACE-M3 model in evaluating the capabilities of medical MLLMs.
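The abstract's criticism of token-overlap metrics can be made concrete. Below is a minimal sketch of a ROUGE-1-style unigram-overlap F1 score (a simplified illustration, not the paper's evaluation code): two medical answers with opposite meanings still receive a high score because they share most of their tokens.

```python
from collections import Counter


def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1 (a simplified ROUGE-1 sketch)."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each shared token counts at most min(ref, cand) times.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)


# The single word "not" flips the clinical meaning, yet overlap stays high.
score = rouge1_f1("the lesion is benign", "the lesion is not benign")
print(f"{score:.3f}")  # high score despite contradictory diagnoses
```

This is exactly the failure mode that motivates model-based evaluators such as ACE-M3: surface overlap cannot distinguish a correct diagnosis from its negation.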