Yuhong Xu


2024

DMIN: A Discourse-specific Multi-granularity Integration Network for Conversational Aspect-based Sentiment Quadruple Analysis
Peijie Huang | Xisheng Xiao | Yuhong Xu | Jiawei Chen
Findings of the Association for Computational Linguistics: ACL 2024

Conversational Aspect-based Sentiment Quadruple Analysis (DiaASQ) aims to extract fine-grained sentiment quadruples from dialogues. Previous research has concentrated primarily on enhancing token-level interactions and still lacks sufficient modeling of the discourse structure of the dialogue. First, it does not incorporate interactions among different utterances in the encoding stage, resulting in limited token-level context understanding for subsequent modules. Second, it ignores the critical fact that discourse information is naturally organized at the utterance level, so learning it solely at the token level is incomplete. In this work, we strengthen the token-level encoder by utilizing a discourse structure called “thread” and graph convolutional networks to enhance token interaction among different utterances. Moreover, we propose an utterance-level encoder to learn the structured speaker and reply information, providing a macro understanding of dialogue discourse. Furthermore, we introduce a novel Multi-granularity Integrator to integrate token-level and utterance-level representations, producing a comprehensive and cohesive contextual understanding of the dialogue. Experiments on two datasets demonstrate that our model achieves state-of-the-art performance. Our code is publicly available at https://github.com/SIGSDSscau/DMIN.
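For orientation, here is a minimal, hypothetical sketch (not the DMIN implementation; the class name ThreadGCNLayer and the block-diagonal thread adjacency are illustrative assumptions) of how a single graph-convolution layer can propagate token representations along a thread structure, as the abstract describes.

```python
# Hypothetical sketch: one GCN layer over token nodes, where edges connect
# tokens whose utterances belong to the same reply thread (an assumption).
import torch
import torch.nn as nn

class ThreadGCNLayer(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, num_tokens, hidden); adj: (batch, num_tokens, num_tokens)
        # Row-normalise the adjacency so each token averages its thread neighbours.
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbour_mean = torch.bmm(adj / degree, token_states)
        # Residual connection keeps the original token representation.
        return torch.relu(self.linear(neighbour_mean)) + token_states

# Toy usage: one dialogue, 6 tokens, hidden size 8, two threads of 3 tokens each.
states = torch.randn(1, 6, 8)
adj = torch.zeros(1, 6, 6)
adj[0, :3, :3] = 1.0  # tokens of thread 1
adj[0, 3:, 3:] = 1.0  # tokens of thread 2
out = ThreadGCNLayer(8)(states, adj)
print(out.shape)  # torch.Size([1, 6, 8])
```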

2023

基于多意图融合框架的联合意图识别和槽填充(A Multi-Intent Fusion Framework for Joint Intent Detection and Slot Filling)
Shangjian Yin (尹商鉴) | Peijie Huang (黄沛杰) | Dongzhu Liang (梁栋柱) | Zhuoqi He (何卓棋) | Qianer Li (黎倩尔) | Yuhong Xu (徐禹洪)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

In recent years, multi-intent spoken language understanding (SLU) has become a research hotspot in natural language processing. Current state-of-the-art multi-intent SLU models adopt a graph-interactive framework for joint multiple intent detection and slot filling, effectively capturing fine-grained intent information for the token-level slot filling task and achieving good performance. However, they overlook the rich information carried by the jointly expressed intents and do not fully exploit multi-intent information to guide the slot filling task. To this end, this paper proposes a Multi-Intent Fusion Framework (MIFF) for joint multiple intent detection and slot filling, enabling the model to accurately identify different intents while using intent information to provide stronger guidance for slot filling. Experiments on the public MixATIS and MixSNIPS datasets show that our model surpasses current state-of-the-art methods in both performance and efficiency, and generalizes effectively from single-domain to multi-domain datasets.
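As a rough illustration of the joint setup described above (not the paper's MIFF architecture; the class name JointIntentSlotModel and the mean-pooling fusion are assumptions), a sigmoid-activated multi-intent vector can be broadcast to every token position to guide slot tagging.

```python
# Hypothetical sketch of joint multi-intent detection and slot filling,
# where a fused multi-intent vector guides the token-level slot head.
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    def __init__(self, hidden: int, num_intents: int, num_slots: int):
        super().__init__()
        self.intent_head = nn.Linear(hidden, num_intents)   # multi-label intents
        self.intent_proj = nn.Linear(num_intents, hidden)   # fuse intents back in
        self.slot_head = nn.Linear(hidden * 2, num_slots)   # token-level slot tags

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden) from any utterance encoder
        utterance = token_states.mean(dim=1)                 # simple pooling (assumption)
        intent_logits = self.intent_head(utterance)          # (batch, num_intents)
        # Soft multi-intent fusion vector, broadcast to every token position.
        intent_ctx = self.intent_proj(torch.sigmoid(intent_logits))
        intent_ctx = intent_ctx.unsqueeze(1).expand_as(token_states)
        slot_logits = self.slot_head(torch.cat([token_states, intent_ctx], dim=-1))
        return intent_logits, slot_logits

model = JointIntentSlotModel(hidden=16, num_intents=5, num_slots=10)
intents, slots = model(torch.randn(2, 7, 16))
print(intents.shape, slots.shape)  # torch.Size([2, 5]) torch.Size([2, 7, 10])
```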

基于互信息最大化和对比损失的多模态对话情绪识别模型(Multimodal Emotion Recognition in Conversation with Mutual Information Maximization and Contrastive Loss)
Qianer Li (黎倩尔) | Peijie Huang (黄沛杰) | Jiawei Chen (陈佳炜) | Jialin Wu (吴嘉林) | Yuhong Xu (徐禹洪) | Peiyuan Lin (林丕源)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

Multimodal emotion recognition in conversation (ERC) is key to building emotional dialogue systems. In recent years, graph-based fusion methods that dynamically aggregate multimodal contextual features in a conversation have improved performance on multimodal ERC. However, these methods do not fully preserve and exploit the valuable information in the input data. Specifically, they neither preserve task-relevant information from the input through to the fusion result, nor make use of the information carried by the labels themselves. This paper proposes MMIC, a multimodal ERC model based on mutual information maximization and contrastive loss, to address these problems. The model hierarchically maximizes the mutual information between modalities at both the input level and the fusion level, so that task-relevant information is preserved during fusion and richer multimodal representations are generated. We also introduce supervised contrastive learning into the graph-based dynamic fusion network; by fully exploiting the information contained in the labels, different emotions are made mutually exclusive, strengthening the model's ability to recognize similar emotions. Extensive experiments on two English and one Chinese public datasets demonstrate the effectiveness and superiority of the proposed model. In addition, case studies confirm that the model effectively preserves task-relevant information and better distinguishes similar emotions. Ablation studies and visualization results demonstrate the effectiveness of each module in the model.
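A minimal sketch of the supervised contrastive loss mentioned above, assuming fused utterance representations and emotion labels as inputs (the function name and temperature value are illustrative, not taken from the paper).

```python
# Hypothetical sketch of a supervised contrastive loss: representations with
# the same emotion label are pulled together, different labels pushed apart.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    # features: (n, d) fused multimodal representations; labels: (n,) emotion ids.
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t() / temperature                     # pairwise similarities
    # Exclude self-similarity on the diagonal.
    logits_mask = ~torch.eye(len(labels), dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask
    # Log-softmax over all other samples, then average over positive pairs.
    log_prob = sim - torch.logsumexp(sim.masked_fill(~logits_mask, float('-inf')),
                                     dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()

# Toy usage: 8 fused representations of dimension 16 with 3 emotion classes.
loss = supervised_contrastive_loss(torch.randn(8, 16),
                                   torch.randint(0, 3, (8,)))
print(loss.item())
```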

2015

Chinese Grammatical Error Diagnosis System Based on Hybrid Model
Xiupeng Wu | Peijie Huang | Jundong Wang | Qingwen Guo | Yuhong Xu | Chuping Chen
Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications