Guiming Hardy Chen
2024
HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale
Junying Chen | Chi Gui | Ruyi Ouyang | Anningzhe Gao | Shunian Chen | Guiming Hardy Chen | Xidong Wang | Zhenyang Cai | Ke Ji | Xiang Wan | Benyou Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The rapid development of multimodal large language models (MLLMs), such as GPT-4V, has led to significant advancements. However, these models still face challenges in medical multimodal capabilities due to limitations in the quantity and quality of medical vision-text data, which stem from data privacy concerns and high annotation costs. While pioneering approaches utilize PubMed's large-scale, de-identified medical image-text pairs to address these limitations, they often fall short due to inherent data noise. To tackle this, we refined medical image-text pairs from PubMed and employed MLLMs (GPT-4V) in an 'unblinded' capacity to denoise and reformat the data, resulting in the **PubMedVision** dataset with 1.3 million medical VQA samples. Our validation demonstrates that: (1) PubMedVision significantly enhances the medical multimodal capabilities of MLLMs, yielding marked improvements on benchmarks including the MMMU Health & Medicine track; (2) manual checks by medical experts and empirical results confirm the superior quality of our dataset relative to other data construction methods. Using PubMedVision, we train **HuatuoGPT-Vision**, a 34B medical MLLM that shows superior performance in medical multimodal scenarios among open-source MLLMs. Our code and data are available at https://github.com/FreedomIntelligence/HuatuoGPT-Vision.
Humans or LLMs as the Judge? A Study on Judgement Bias
Guiming Hardy Chen | Shunian Chen | Ziche Liu | Feng Jiang | Benyou Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Adopting humans and large language models (LLMs) as judges (*a.k.a.* human- and LLM-as-a-judge) for evaluating the performance of LLMs has recently gained attention. Nonetheless, this approach concurrently introduces potential biases from humans and LLMs, calling into question the reliability of the evaluation results. In this paper, we propose a novel framework, free from referencing ground-truth annotations, for investigating **Misinformation Oversight Bias**, **Gender Bias**, **Authority Bias**, and **Beauty Bias** in LLM and human judges. We curate a dataset based on the revised Bloom's Taxonomy and conduct thousands of evaluations. Results show that human and LLM judges are vulnerable to perturbations to varying degrees, and that even cutting-edge judges possess considerable biases. We further exploit these biases to conduct attacks on LLM judges. We hope that our work can alert the community to the bias and vulnerability of human- and LLM-as-a-judge, as well as the urgency of developing robust evaluation systems.
Co-authors
- Shunian Chen 2
- Benyou Wang 2
- Junying Chen 1
- Chi Gui 1
- Ruyi Ouyang 1
- Anningzhe Gao 1
- Xidong Wang 1
- Zhenyang Cai 1
- Ke Ji 1
- Xiang Wan 1
- Ziche Liu 1
- Feng Jiang 1