Yanxu Ji
2025
Overview Report of CCL25-Eval Task 12: Entity-Relation Triple Extraction for Chinese Speech
Wenxuan Mu | Jinzhong Ning | Yilin Pan | Paerhati Tulajiang | Yuanyuan Sun | SongTao Li | Yanxu Ji | Weiming Yin | Yijia Zhang | Hongfei Lin
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
The Chinese Speech Entity-Relation Triple Extraction Task (CSRTE) is a shared-task evaluation held at the 24th China National Conference on Computational Linguistics. It aims to automatically identify and extract entities and their mutual relations from Chinese speech data, producing structured relation triples (head entity, relation, tail entity). The task seeks to improve the accuracy and efficiency of triple extraction from Chinese speech, to strengthen model robustness across varied contexts and complex acoustic scenarios, and to achieve fully automated end-to-end processing from speech input to textual triple output. The evaluation is intended to advance Chinese speech information extraction, promote deeper integration of speech and natural language processing technologies, and supply richer, more accurate foundational data for intelligent applications. A total of 257 teams registered for the evaluation, of which 59 submitted results to the A leaderboard. The top 15 teams by score advanced on the A leaderboard, and the top-performing teams among them submitted technical reports.
LLM-Driven Implicit Target Augmentation and Fine-Grained Contextual Modeling for Zero-Shot and Few-Shot Stance Detection
Yanxu Ji | Jinzhong Ning | Yijia Zhang | Zhi Liu | Hongfei Lin
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Stance detection aims to identify the attitude expressed in a text towards a specific target. Recent studies on zero-shot and few-shot stance detection focus primarily on learning generalized representations from explicit targets. However, these methods often neglect implicit yet semantically important targets and fail to adaptively adjust the relative contributions of text and target in light of contextual dependencies. To overcome these limitations, we propose a novel two-stage framework. First, a data augmentation framework named Hierarchical Collaborative Target Augmentation (HCTA) employs Large Language Models (LLMs) to identify and annotate implicit targets via Chain-of-Thought (CoT) prompting and multi-LLM voting, significantly enriching the training data with latent semantic relations. Second, we introduce DyMCA, a Dynamic Multi-level Context-aware Attention Network, which integrates joint text-target encoding with a content-aware mechanism to dynamically adjust text-target contributions based on context. Experiments on the benchmark dataset demonstrate that our approach achieves state-of-the-art results, confirming the effectiveness of implicit target augmentation and fine-grained contextual modeling.