Jiayuan Ding
2024
MMedAgent: Learning to Use Medical Tools with Multi-modal Agent
Binxu Li | Tiankai Yan | Yuanting Pan | Jie Luo | Ruiyang Ji | Jiayuan Ding | Zhe Xu | Shilong Liu | Haoyu Dong | Zihao Lin | Yixin Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
Multi-Modal Large Language Models (MLLMs), despite their success, exhibit limited generality and often fall short when compared to specialized models. Recently, LLM-based agents have been developed to address these challenges by selecting appropriate specialized models as tools based on user inputs. However, such advancements have not been extensively explored within the medical domain. To bridge this gap, this paper introduces the first agent explicitly designed for the medical field, named Multi-modal Medical Agent (MMedAgent). We curate an instruction-tuning dataset comprising six medical tools that solve seven tasks across five modalities, enabling the agent to choose the most suitable tools for a given task. Comprehensive experiments demonstrate that MMedAgent achieves superior performance across a variety of medical tasks compared to state-of-the-art open-source methods and even the closed-source model GPT-4o. Furthermore, MMedAgent can efficiently update and integrate new medical tools.
2023
Are Message Passing Neural Networks Really Helpful for Knowledge Graph Completion?
Juanhui Li | Harry Shomer | Jiayuan Ding | Yiqi Wang | Yao Ma | Neil Shah | Jiliang Tang | Dawei Yin
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Knowledge graphs (KGs) facilitate a wide variety of applications. Despite great efforts in their creation and maintenance, even the largest KGs are far from complete. Hence, KG completion (KGC) has become one of the most crucial tasks in KG research. Recently, considerable literature in this space has centered on the use of Message Passing (Graph) Neural Networks (MPNNs) to learn powerful embeddings. The success of these methods is naturally attributed to the use of MPNNs over simpler multi-layer perceptron (MLP) models, given their additional message passing (MP) component. In this work, we find that, surprisingly, simple MLP models are able to achieve comparable performance to MPNNs, suggesting that MP may not be as crucial as previously believed. With further exploration, we show that careful scoring function and loss function design has a much stronger influence on KGC model performance. This suggests that prior work has conflated scoring function design, loss function design, and MP, and it offers promising insights into the scalability of state-of-the-art KGC methods today, as well as motivation for more suitable MP designs for KGC tasks tomorrow.