Huang Xuanjing

Also published as: 黄萱菁


2024

基于大模型的交互式谎言识别:数据和模型(Unveiling Lies: Enhancing Large Language Models for Real-World Lie Detection in Interactive Dialogues)
Ji Chengwei (纪程炜) | Wang Siyuan (王思远) | Li Taishan (李太山) | Mou Xinyi (牟馨忆) | Zhao Limin (赵丽敏) | Xue Lanqing (薛兰青) | Ying Zhenzhe (应缜哲) | Wang Weiqiang (王维强) | Huang Xuanjing (黄萱菁) | Wei Zhongyu (魏忠钰)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

“Lie detection over interactive dialogues is in broad demand across application scenarios. Existing lie-detection techniques typically produce a final decision at the whole-dialogue level and lack logical analysis of fine-grained lie features and cues, so they struggle to meet the interpretability requirements of these scenarios. This paper proposes the concepts of lie indicators and semantic-inconsistency cues to help identify lies in dialogue and improve the interpretability of lie-detection methods. The paper also proposes a lie-detection framework for training a lie-detection large language model (LD-LLM), which exploits fine-grained lie indicators and discovers whether semantic-inconsistency cues exist in the dialogue, achieving more reliable lie detection. We construct two lie-detection datasets from real interactive scenarios, FinLIE and IDLIE, focusing on financial risk control and identity verification, respectively. Experimental results show that LD-LLM, fine-tuned on instruction datasets built from these two datasets, achieves state-of-the-art performance on lie detection in real interactions.”

从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training, Evaluation, and Trends)
Li Zejun (李泽君) | Zhang Jiwen (张霁雯) | Wang Ye (王晔) | Du Mengfei (杜梦飞) | Liu Qingwen (刘晴雯) | Wang Dianyi (王殿仪) | Wu Binhao (吴斌浩) | Luo Ruipu (罗瑞璞) | Huang Xuanjing (黄萱菁) | Wei Zhongyu (魏忠钰)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum)

“Multimedia information has played a vital role in the development of human society, and building intelligent systems capable of processing multi-modal information is a necessary step toward artificial general intelligence. With the development of pre-training techniques and the demand for general-purpose models, multi-modal research has shifted from early task-specific methods to building unified, general-purpose multi-modal foundation models. Initial explorations of unified multi-modal models, inspired by BERT, approached the problem from the perspective of representation learning, building multi-modal pre-trained models that provide effective initialization for diverse downstream tasks; although effective, such methods remain limited in generality by the pretrain-then-finetune paradigm and cannot be applied more broadly and efficiently. In recent years, with the development of large language models, multi-modal large models built on LLM backbones have shown great potential: they possess strong capabilities in information perception, interaction, and reasoning, generalize effectively to diverse scenarios, and offer a practical path toward general-purpose AI systems in the new era. Starting from the perspective of building unified multi-modal models, this paper introduces and organizes the development of related work, from multi-modal pre-training to multi-modal large models, covering the corresponding architectures, training, evaluation methods, and development trends, providing readers with a comprehensive overview.”

2023

Rethinking Label Smoothing on Multi-hop Question Answering
Yin Zhangyue | Wang Yuxin | Hu Xiannian | Wu Yiguang | Yan Hang | Zhang Xinyu | Cao Zhao | Huang Xuanjing | Qiu Xipeng
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

“Multi-Hop Question Answering (MHQA) is a significant area in question answering, requiring multiple reasoning components, including document retrieval, supporting sentence prediction, and answer span extraction. In this work, we present the first application of label smoothing to the MHQA task, aiming to enhance generalization capabilities in MHQA systems while mitigating overfitting of answer spans and reasoning paths in the training set. We introduce a novel label smoothing technique, F1 Smoothing, which incorporates uncertainty into the learning process and is specifically tailored for Machine Reading Comprehension (MRC) tasks. Moreover, we employ a Linear Decay Label Smoothing Algorithm (LDLA) in conjunction with curriculum learning to progressively reduce uncertainty throughout the training process. Experiments on the HotpotQA dataset confirm the effectiveness of our approach in improving generalization, leading to new state-of-the-art performance on the HotpotQA leaderboard.”
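The linear-decay idea in the abstract above can be illustrated with a short sketch. This is not the paper's implementation: the function names, the initial smoothing factor, and the decay-to-zero schedule are illustrative assumptions; it only shows the general mechanism of softening one-hot targets by a smoothing factor that shrinks linearly over training steps.

```python
# Illustrative sketch of linearly decayed label smoothing (assumed details,
# not the paper's actual LDLA code or hyperparameters).
import numpy as np

def smoothed_targets(true_idx: int, num_classes: int, eps: float) -> np.ndarray:
    """One-hot target softened by smoothing factor eps:
    the true class gets 1 - eps, the rest share eps uniformly."""
    t = np.full(num_classes, eps / (num_classes - 1))
    t[true_idx] = 1.0 - eps
    return t

def linear_decay_eps(step: int, total_steps: int, eps0: float = 0.1) -> float:
    """Linearly anneal the smoothing factor from eps0 down to 0,
    so targets become progressively closer to one-hot (less uncertainty)."""
    return eps0 * max(0.0, 1.0 - step / total_steps)

# Early in training: softened targets reflect higher uncertainty.
print(smoothed_targets(2, 4, linear_decay_eps(0, 100)))
# Late in training: the smoothing factor has decayed to 0, targets are one-hot.
print(smoothed_targets(2, 4, linear_decay_eps(100, 100)))
```

In practice such targets would feed a cross-entropy loss (e.g. the `label_smoothing` argument of PyTorch's `CrossEntropyLoss` implements the fixed-factor variant); the decay schedule here is what makes the curriculum-style annealing possible.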