从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,)

Li Zejun (李泽君), Zhang Jiwen (张霁雯), Wang Ye (王晔), Du Mengfei (杜梦飞), Liu Qingwen (刘晴雯), Wang Dianyi (王殿仪), Wu Binhao (吴斌浩), Luo Ruipu (罗瑞璞), Huang Xuanjing (黄萱菁), Wei Zhongyu (魏忠钰)


Abstract
Multimedia information plays a vital role in the development of human society, and building intelligent systems capable of processing multi-modal information is a necessary step on the path to artificial general intelligence. With the development of pre-training techniques and the growing demand for general-purpose models, multi-modal research has shifted from early task-specific methods toward building unified, general-purpose multi-modal foundation models. Initial explorations of unified multi-modal models were inspired by BERT: from a representation-learning perspective, they built multi-modal pre-trained models that provide effective initialization for diverse downstream tasks. Although effective, these methods remain limited in generality by the pre-train-then-fine-tune paradigm and cannot be applied more broadly and efficiently. In recent years, with the development of large language models, multi-modal large models built on LLM backbones have shown great potential: such models possess strong capabilities in information perception, interaction, and reasoning, generalize effectively to diverse scenarios, and offer a practical path toward general artificial intelligence systems in the new era. Starting from the perspective of building unified multi-modal models, this paper introduces and organizes the development of related work, from multi-modal pre-training to multi-modal large models, covering the corresponding architectures, training and evaluation methods, and development trends, to provide readers with a comprehensive overview.
Anthology ID:
2024.ccl-2.1
Volume:
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum)
Month:
July
Year:
2024
Address:
Taiyuan, China
Editor:
Xin Zhao
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Note:
Pages:
1–33
Language:
Chinese
URL:
https://aclanthology.org/2024.ccl-2.1/
Cite (ACL):
Li Zejun, Zhang Jiwen, Wang Ye, Du Mengfei, Liu Qingwen, Wang Dianyi, Wu Binhao, Luo Ruipu, Huang Xuanjing, and Wei Zhongyu. 2024. 从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,). In Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum), pages 1–33, Taiyuan, China. Chinese Information Processing Society of China.
Cite (Informal):
从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,) (Zejun et al., CCL 2024)
PDF:
https://aclanthology.org/2024.ccl-2.1.pdf