Jiwen Zhang

Also published as: 霁雯


2025

VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models
Zejun Li | Ruipu Luo | Jiwen Zhang | Minghui Qiu | Xuanjing Huang | Zhongyu Wei
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

2024

从多模态预训练到多模态大模型:架构、训练、评测、趋势概览 (From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training, Evaluation, and Trends)
Zejun Li (李泽君) | Jiwen Zhang (张霁雯) | Ye Wang (王晔) | Mengfei Du (杜梦飞) | Qingwen Liu (刘晴雯) | Dianyi Wang (王殿仪) | Binhao Wu (吴斌浩) | Ruipu Luo (罗瑞璞) | Xuanjing Huang (黄萱菁) | Zhongyu Wei (魏忠钰)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum)

Multimedia information has played a crucial role in the development of human society, and building intelligent systems capable of processing multi-modal information is a necessary step on the path toward artificial general intelligence. With the development of pre-training techniques and the growing demand for general-purpose models, multi-modal research has shifted from early task-specific methods to building unified, general-purpose multi-modal foundation models. Early explorations of unified multi-modal models, inspired by BERT, approached the problem from the perspective of representation learning, building multi-modal pre-trained models that provide effective initializations for various downstream tasks. Although effective, such methods remain limited in generality by the pretrain-then-finetune paradigm and cannot be applied more broadly and efficiently. In recent years, with the development of large language models, multi-modal large models built on LLM backbones have shown great potential: these models possess strong capabilities in information perception, interaction, and reasoning, generalize effectively to diverse scenarios, and offer a practical path toward general-purpose AI systems in the new era. Starting from the perspective of building unified multi-modal models, this paper reviews and organizes the development of related work, from multi-modal pre-training to multi-modal large models, covering the corresponding architectures, training and evaluation methods, and development trends, to provide readers with a comprehensive overview.

Android in the Zoo: Chain-of-Action-Thought for GUI Agents
Jiwen Zhang | Jihao Wu | Teng Yihua | Minghui Liao | Nuo Xu | Xiao Xiao | Zhongyu Wei | Duyu Tang
Findings of the Association for Computational Linguistics: EMNLP 2024

Large language models (LLMs) have led to a surge of autonomous GUI agents for smartphones, which complete tasks triggered by natural language by predicting a sequence of API actions. Even though the task highly relies on past actions and visual observations, existing studies typically make little use of the semantic information carried by intermediate screenshots and screen operations. To address this, this work presents Chain-of-Action-Thought (dubbed CoAT), which takes into account the description of previous actions, the current screen, and, more importantly, action thinking about which actions should be performed and the outcomes the chosen action leads to. We demonstrate that, in a zero-shot setting on three off-the-shelf LMMs, CoAT significantly improves action prediction compared to previously proposed context modeling. To further facilitate research in this line, we construct a dataset, Android-In-The-Zoo (AitZ), which contains 18,643 screen-action pairs together with chain-of-action-thought annotations. Experiments show that fine-tuning a 1B model (i.e., AUTO-UI-base) on our AitZ dataset achieves on-par performance with CogAgent-Chat-18B.

DELAN: Dual-Level Alignment for Vision-and-Language Navigation by Cross-Modal Contrastive Learning
Mengfei Du | Binhao Wu | Jiwen Zhang | Zhihao Fan | Zejun Li | Ruipu Luo | Xuanjing Huang | Zhongyu Wei
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Vision-and-Language Navigation (VLN) requires an agent to navigate in unseen environments by following natural language instructions. For task completion, the agent needs to align and integrate various navigation modalities, including the instruction, observations, and navigation history. Existing works primarily concentrate on cross-modal attention at the fusion stage to achieve this objective. Nevertheless, modality features generated by disparate uni-encoders reside in their own spaces, leading to a decline in the quality of cross-modal fusion and decisions. To address this problem, we propose a Dual-levEL AligNment (DELAN) framework based on cross-modal contrastive learning. This framework is designed to align the various navigation-related modalities before fusion, thereby enhancing cross-modal interaction and action decision-making. Specifically, we divide pre-fusion alignment into dual levels, an instruction-history level and a landmark-observation level, according to their semantic correlations. We also reconstruct a dual-level instruction to adapt to the dual-level alignment. As the training signals for pre-fusion alignment are extremely limited, self-supervised contrastive learning strategies are employed to enforce matching between different modalities. Our approach integrates seamlessly with the majority of existing models, yielding improved navigation performance on various VLN benchmarks, including R2R, R4R, RxR, and CVDN.