Yiyang Du
2024
Model Composition for Multimodal Large Language Models
Chi Chen | Yiyang Du | Zheng Fang | Ziyue Wang | Fuwen Luo | Peng Li | Ming Yan | Ji Zhang | Fei Huang | Maosong Sun | Yang Liu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent developments in Multimodal Large Language Models (MLLMs) have shown rapid progress, moving towards the goal of creating versatile MLLMs that understand inputs from various modalities. However, existing methods typically rely on joint training with paired multimodal instruction data, which is resource-intensive and challenging to extend to new modalities. In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model. Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters. Furthermore, we introduce DAMC to address parameter interference and mismatch issues during the merging process, thereby enhancing the model performance. To facilitate research in this area, we propose MCUB, a benchmark for assessing the ability of MLLMs to understand inputs from diverse modalities. Experiments on this benchmark and four other multimodal understanding tasks show significant improvements over baselines, proving that model composition can create a versatile model capable of processing inputs from multiple modalities.
2022
G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents
Shiwei Zhang | Yiyang Du | Guanzhong Liu | Zhao Yan | Yunbo Cao
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
Goal-oriented dialogue generation grounded in multiple documents (MultiDoc2Dial) is a challenging and realistic task. Unlike previous works, which treat document-grounded dialogue modeling as a machine reading comprehension task over a single document, the MultiDoc2Dial task faces the challenges of seeking information from multiple documents and generating conversation responses simultaneously. This paper summarizes our entries to the agent response generation subtask of the MultiDoc2Dial dataset. We propose a three-stage solution, Grounding-guided goal-oriented dialogues generation (G4), which predicts groundings from retrieved passages to guide the generation of the final response. Our experiments show that G4 achieves a SacreBLEU score of 31.24 and an F1 score of 44.6, which is 60.7% higher than the baseline model.
Co-authors
- Shiwei Zhang 1
- Guanzhong Liu 1
- Zhao Yan 1
- Yunbo Cao 1
- Chi Chen 1