Yun Xue

2025

Ambiguity-aware Multi-level Incongruity Fusion Network for Multi-Modal Sarcasm Detection
Kuntao Li | Yifan Chen | Qiaofeng Wu | Weixing Mai | Fenghuan Li | Yun Xue
Proceedings of the 31st International Conference on Computational Linguistics

Multi-modal sarcasm detection aims to identify whether a given image-text pair is sarcastic. The pivotal factor in this task is accurately capturing incongruities across modalities. Although existing studies have achieved impressive success, they have primarily focused on fusing textual and visual information to establish cross-modal correlations, overlooking the significance of the original unimodal incongruity information at the text level and image level. Furthermore, their cross-modal fusion strategies neglect the effect of the inherent ambiguity within the text and image modalities on multimodal fusion. To overcome these limitations, we propose a novel Ambiguity-aware Multi-level Incongruity Fusion Network (AMIF) for multi-modal sarcasm detection. Our method involves a multi-level incongruity learning module that simultaneously captures incongruity information at the text level, image level, and cross-modal level. Additionally, an ambiguity-based fusion module is developed to dynamically learn reasonable weights and interpretably aggregate the incongruity features from different levels. Comprehensive experiments on a publicly available dataset demonstrate the superiority of our proposed model over state-of-the-art methods.
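The ambiguity-based fusion idea in the abstract can be illustrated with a minimal sketch: features from several levels are aggregated with weights derived from per-level ambiguity scores, so that a more ambiguous level contributes less. This is an assumption-laden toy (the function names, the fixed scores, and the softmax-over-negated-ambiguity rule are all hypothetical, not AMIF's actual formulation, where the weights are learned):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def ambiguity_weighted_fusion(features, ambiguity_scores):
    """Fuse per-level feature vectors (e.g. text-level, image-level,
    cross-modal-level) with weights derived from ambiguity scores.
    Negating the scores before the softmax means a more ambiguous
    level receives a smaller weight.  Hypothetical sketch only."""
    weights = softmax([-a for a in ambiguity_scores])
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights
```

With equal ambiguity scores the fusion degenerates to a plain average; raising one level's ambiguity pushes its weight toward zero, which is what makes the aggregation interpretable.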

MSG-LLM: A Multi-scale Interactive Framework for Graph-enhanced Large Language Models
Jiayu Ding | Zhangkai Zheng | Benshuo Lin | Yun Xue | Yiping Song
Proceedings of the 31st International Conference on Computational Linguistics

Graph-enhanced large language models (LLMs) leverage LLMs’ remarkable ability to model language and use graph structures to capture topological relationships. Existing graph-enhanced LLMs typically retrieve similar subgraphs to augment the LLM, where the subgraphs carry the entities related to the target and the relations among those entities. However, existing retrieval methods focus solely on accurately matching the target subgraph against candidate subgraphs at the same scale, neglecting that subgraphs at different scales may also share similar semantics or structures. To tackle this challenge, we introduce a graph-enhanced LLM with multi-scale retrieval (MSG-LLM). It captures similar graph structures and semantics across graphs at different scales and bridges graph alignment across multiple scales. Larger scales maintain the graph’s global information, while smaller scales preserve the details of fine-grained sub-structures. Specifically, we construct a multi-scale variation that dynamically shrinks the scale of graphs. Further, we employ a graph kernel search to discover subgraphs from the entire graph, which essentially achieves multi-scale graph retrieval in Hilbert space. Additionally, we propose multi-scale interactions (message passing) over graphs at various scales to integrate key information. The interaction also bridges the graphs and the LLM, aiding both graph retrieval and LLM generation. Finally, we employ Chain-of-Thought-based LLM prediction to perform the downstream tasks. We evaluate our approach on two graph-based downstream tasks, and the experimental results show that our method achieves state-of-the-art performance.
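The phrase "graph kernel search ... in Hilbert space" can be grounded with a classic example: the Weisfeiler-Lehman subtree kernel, which maps a graph to a multiset of refined node labels and scores similarity as a dot product of those feature maps. The sketch below is generic WL-kernel machinery, not MSG-LLM's actual retrieval procedure; the data layout (`adj` as a neighbour dict, string labels) is purely illustrative:

```python
from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Weisfeiler-Lehman subtree features for one graph.
    adj:    {node: set of neighbours}
    labels: {node: initial label (e.g. entity type)}
    Returns a Counter over labels from all refinement rounds --
    a feature map whose dot product is the WL kernel value."""
    feats = Counter(labels.values())
    cur = dict(labels)
    for _ in range(iterations):
        nxt = {}
        for v in adj:
            # Compress a node's label with its sorted neighbour labels.
            neigh = sorted(cur[u] for u in adj[v])
            nxt[v] = cur[v] + "|" + ",".join(neigh)
        cur = nxt
        feats.update(cur.values())
    return feats

def wl_kernel(f1, f2):
    """Dot product of two WL feature maps: higher = more similar."""
    return sum(c * f2[lab] for lab, c in f1.items())
```

A retrieval step would then rank candidate subgraphs (at any scale) by `wl_kernel` against the target's feature map; the kernel's implicit feature space is the Hilbert space the abstract alludes to.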

2024

D2R: Dual-Branch Dynamic Routing Network for Multimodal Sentiment Detection
Yifan Chen | Kuntao Li | Weixing Mai | Qiaofeng Wu | Yun Xue | Fenghuan Li
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Semantics-Aware Dual Graph Convolutional Networks for Argument Pair Extraction
Minzhao Guan | Zhixun Qiu | Fenghuan Li | Yun Xue
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Argument pair extraction (APE) aims to extract interactive argument pairs from two argument passages. Existing works generally focus on either simple argument interaction or task-form conversion, rather than thorough exploitation of the deep-level features of argument pairs. To address this issue, a Semantics-Aware Dual Graph Convolutional Network (SADGCN) is proposed for APE. Specifically, a co-occurring word graph is designed to capture the lexical and semantic relevance of arguments with a pre-trained Rouge-guided Transformer (ROT). Considering the topic relevance within argument pairs, a topic graph is constructed via a neural topic model to leverage the topic information of the argument passages. The two graphs are fused via a gating mechanism, which contributes to the extraction of argument pairs. Experimental results indicate that our approach achieves state-of-the-art performance, improving F1 score by 6.56% over the best existing alternative.
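The "fused via a gating mechanism" step describes a standard pattern: a sigmoid gate computed from both inputs interpolates element-wise between the two graph encodings. A minimal sketch, assuming the common formulation g = σ(W·[h_word; h_topic] + b), h = g⊙h_word + (1−g)⊙h_topic (all parameter names are illustrative, not SADGCN's actual parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(h_word, h_topic, w_gate, b_gate):
    """Fuse two graph encodings with an element-wise gate.
    h_word, h_topic: equal-length feature vectors from the
                     co-occurring word graph and the topic graph
    w_gate:          one weight row per output dim, each of length
                     len(h_word) + len(h_topic)
    b_gate:          one bias per output dim
    Hypothetical sketch of a generic gating mechanism."""
    concat = h_word + h_topic  # [h_word; h_topic]
    fused = []
    for i in range(len(h_word)):
        z = sum(w * x for w, x in zip(w_gate[i], concat)) + b_gate[i]
        g = sigmoid(z)
        fused.append(g * h_word[i] + (1.0 - g) * h_topic[i])
    return fused
```

With zero weights and biases the gate sits at 0.5 and the fusion is a plain average; training the gate lets the model decide, per dimension, whether lexical or topical evidence should dominate.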

2021

Dynamic and Multi-Channel Graph Convolutional Networks for Aspect-Based Sentiment Analysis
Shiguan Pang | Yun Xue | Zehao Yan | Weihao Huang | Jinhui Feng
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

基于层次注意力机制和门机制的属性级别情感分析(Aspect-level Sentiment Analysis Based on Hierarchical Attention and Gate Networks)
Chao Feng (冯超) | Haihui Li (黎海辉) | Hongya Zhao (赵洪雅) | Yun Xue (薛云) | Jingyao Tang (唐靖尧)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

In recent years, fine-grained aspect-level sentiment analysis has attracted increasing attention in both industry and academia; its goal is to identify the sentiment polarity corresponding to each of the multiple aspect terms in a sentence. Currently, most work on aspect-level sentiment analysis concentrates on the design of attention mechanisms, so as to highlight the contributions of different words in the context and the aspect term, while relating the context and the aspect term to each other. This paper proposes handling aspect-level sentiment analysis with a hierarchical attention mechanism and a gate mechanism: after obtaining the hidden states of the aspect term, an attention mechanism produces a new representation of the aspect term, and this new representation is then used with a further attention mechanism to obtain a new representation of the context. This hierarchical attention design makes the representations of the context and the aspect term more accurate. Meanwhile, a gate mechanism selects the information in the context that is useful for the aspect term, thereby enriching the context representation. Experimental results on the SemEval 2014 Task 4 and Twitter datasets demonstrate the effectiveness of the proposed model.
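The two-stage attention described above (refine the aspect representation first, then attend over the context with it) can be sketched with plain dot-product attention. This is a generic illustration under assumed details (mean-pooled context as the first query, unscaled dot-product scores); the paper's exact scoring functions may differ:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(query, hiddens):
    """Dot-product attention: pool a list of hidden-state vectors
    into one vector, weighted by similarity to the query."""
    scores = softmax([sum(q * h_i for q, h_i in zip(query, h))
                      for h in hiddens])
    dim = len(hiddens[0])
    return [sum(a * h[i] for a, h in zip(scores, hiddens))
            for i in range(dim)]

def hierarchical_attention(context_h, aspect_h):
    """Stage 1: pool aspect hidden states, queried by a context
    summary (mean pooling assumed here).  Stage 2: re-pool the
    context hidden states, queried by the refined aspect vector."""
    dim = len(context_h[0])
    ctx_mean = [sum(h[i] for h in context_h) / len(context_h)
                for i in range(dim)]
    aspect_rep = attention_pool(ctx_mean, aspect_h)
    context_rep = attention_pool(aspect_rep, context_h)
    return context_rep, aspect_rep
```

The hierarchy matters because the second attention is conditioned on an aspect representation that has already been disambiguated by the first, rather than on raw aspect-word embeddings.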