2025
DialogueMMT: Dialogue Scenes Understanding Enhanced Multi-modal Multi-task Tuning for Emotion Recognition in Conversations
ChenYuan He | Senbin Zhu | Hongde Liu | Fei Gao | Yuxiang Jia | Hongying Zan | Min Peng
Proceedings of the 31st International Conference on Computational Linguistics
Emotion recognition in conversations (ERC) has garnered significant attention from the research community. However, due to the complexity of visual scenes and dialogue contextual dependencies in conversations, previous ERC methods fail to handle emotional cues from both visual sources and discourse structures. Furthermore, existing state-of-the-art ERC models are trained and tested separately on each individual ERC dataset, without verifying their effectiveness across multiple datasets simultaneously. To address these challenges, this paper proposes an innovative framework for ERC, called Dialogue Scenes Understanding Enhanced Multi-modal Multi-task Tuning (DialogueMMT). More concretely, a novel video-language connector is applied within the large vision-language model to capture video features effectively. Additionally, we utilize multi-task instruction tuning with a unified ERC dataset to enhance the model’s understanding of multi-modal dialogue scenes and employ a chain-of-thought strategy to improve emotion classification performance. Extensive experimental results on three benchmark ERC datasets indicate that the proposed DialogueMMT framework consistently outperforms existing state-of-the-art approaches in overall performance.
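The abstract does not detail the connector's architecture, so the following is only a minimal PyTorch sketch of how a video-language connector of this kind might pool frame-level visual features and project them into an LLM's token-embedding space; the module name, query-pooling design, and all dimensions are assumptions for illustration.

```python
# Minimal sketch (PyTorch) of a video-language connector: pool frame-level
# visual features and project them into the LLM's token-embedding space.
# Module names, pooling style, and dimensions are illustrative, not the paper's code.
import torch
import torch.nn as nn

class VideoLanguageConnector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096, num_queries: int = 32):
        super().__init__()
        # Learnable query tokens attend over frame features (Q-Former-style pooling).
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vision_dim, num_heads=8, batch_first=True)
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, vision_dim) from a frozen visual encoder
        q = self.queries.unsqueeze(0).expand(frame_feats.size(0), -1, -1)
        pooled, _ = self.attn(q, frame_feats, frame_feats)  # (batch, num_queries, vision_dim)
        return self.proj(pooled)                            # (batch, num_queries, llm_dim)

if __name__ == "__main__":
    connector = VideoLanguageConnector()
    video_tokens = connector(torch.randn(2, 16, 1024))
    print(video_tokens.shape)  # torch.Size([2, 32, 4096]) -> prepended to text embeddings
```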
GenWebNovel: A Genre-oriented Corpus of Entities in Chinese Web Novels
Hanjie Zhao | Yuchen Yan | Senbin Zhu | Hongde Liu | Yuxiang Jia | Hongying Zan | Min Peng
Proceedings of the 31st International Conference on Computational Linguistics
Entities are important for understanding literary works, which emphasize characters, plots, and settings. Research on entity recognition, especially nested entity recognition, in the literary domain is still insufficient, partly due to the lack of annotated data. To address this issue, we construct the first Genre-oriented Corpus for Entity Recognition in Chinese Web Novels, namely GenWebNovel, comprising 400 chapters totaling 1,214,283 tokens under two genres, XuanHuan (Eastern Fantasy) and History. Based on the corpus, we analyze the distribution of different types of entities, including person, location, and organization. We also compare the nesting patterns of nested entities between GenWebNovel and the English corpus LitBank. Even though both belong to the literary domain, entities in different genres share little overlap, making genre adaptation of NER (Named Entity Recognition) a hard problem. We propose a novel method that utilizes a pre-trained language model as an in-context learning example retriever to boost the performance of large language models. Our experiments show that this approach significantly enhances entity recognition, matching state-of-the-art (SOTA) models without requiring additional training data. Our code, dataset, and model are available at https://github.com/hjzhao73/GenWebNovel.
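As a rough illustration of the retrieval idea, the sketch below uses a generic sentence encoder (here `sentence-transformers`, an assumption) to pick the annotated sentences most similar to the target and place them in an LLM prompt as in-context examples; the prompt wording and data fields are hypothetical.

```python
# Illustrative sketch of retrieval-augmented prompting for NER: encode the
# target sentence, retrieve the most similar annotated sentences, and put them
# in the LLM prompt as in-context examples. Encoder and prompt are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def build_ner_prompt(target: str, annotated_pool: list[dict], k: int = 3) -> str:
    """annotated_pool items look like {"text": ..., "entities": "PER: 张三; LOC: 华山"}."""
    pool_emb = encoder.encode([ex["text"] for ex in annotated_pool], convert_to_tensor=True)
    target_emb = encoder.encode(target, convert_to_tensor=True)
    top = util.semantic_search(target_emb, pool_emb, top_k=k)[0]

    lines = ["Identify all person, location and organization entities (nested spans included)."]
    for hit in top:
        ex = annotated_pool[hit["corpus_id"]]
        lines.append(f"Sentence: {ex['text']}\nEntities: {ex['entities']}")
    lines.append(f"Sentence: {target}\nEntities:")
    return "\n\n".join(lines)
```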
pdf
bib
abs
SILC-EFSA: Self-aware In-context Learning Correction for Entity-level Financial Sentiment Analysis
Senbin Zhu | ChenYuan He | Hongde Liu | Pengcheng Dong | Hanjie Zhao | Yuchen Yan | Yuxiang Jia | Hongying Zan | Min Peng
Proceedings of the 31st International Conference on Computational Linguistics
In recent years, fine-grained sentiment analysis in finance has gained significant attention, but the scarcity of entity-level datasets remains a key challenge. To address this, we have constructed the largest English and Chinese financial entity-level sentiment analysis datasets to date. Building on this foundation, we propose a novel two-stage sentiment analysis approach called Self-aware In-context Learning Correction (SILC). The first stage involves fine-tuning a base large language model to generate pseudo-labeled data specific to our task. In the second stage, we train a correction model using a GNN-based example retriever, which is informed by the pseudo-labeled data. This two-stage strategy has allowed us to achieve state-of-the-art performance on the newly constructed datasets, advancing the field of financial sentiment analysis. In a case study, we demonstrate the enhanced practical utility of our data and methods in monitoring the cryptocurrency market. Our datasets and code are available at https://github.com/NLP-Bin/SILC-EFSA.
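The abstract only outlines the correction stage, so the following is a hypothetical sketch of it: the stage-one pseudo-label is shown to a correction model together with similar retrieved examples, which the model can confirm or revise. The prompt format and field names are assumptions, and the retrieved examples stand in for the output of the GNN-based example retriever described above.

```python
# Hypothetical sketch of the correction stage: the stage-one pseudo-label for a
# target entity is shown to a correction model together with similar retrieved
# examples (produced by the GNN-based example retriever, not implemented here),
# and the model either confirms or revises the sentiment. Prompt wording and
# field names are illustrative only.
def build_correction_prompt(sentence, entity, draft_label, retrieved_examples):
    lines = [
        "Task: verify the sentiment toward the given financial entity.",
        "Reference examples (pseudo-labeled):",
    ]
    for ex in retrieved_examples:
        lines.append(f"- Sentence: {ex['sentence']} | Entity: {ex['entity']} | Sentiment: {ex['label']}")
    lines += [
        f"Sentence: {sentence}",
        f"Entity: {entity}",
        f"Draft sentiment from stage one: {draft_label}",
        "If the draft is wrong, output the corrected sentiment (positive/negative/neutral); otherwise repeat it.",
    ]
    return "\n".join(lines)
```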
2024
FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based Sentiment Analysis
Songhua Yang | Xinke Jiang | Hanjie Zhao | Wenxuan Zeng | Hongde Liu | Yuxiang Jia
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture fine-grained sentiment across diverse domains. While existing research narrowly focuses on single-domain applications, constrained by methodological limitations and data scarcity, sentiment naturally traverses multiple domains in practice. Although large language models (LLMs) offer a promising solution for ABSA, they are difficult to integrate effectively with established techniques such as graph-based models and linguistic features, because modifying their internal architecture is not easy. To alleviate this problem, we propose a novel framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA). The core insight of FaiMA is to utilize in-context learning (ICL) as a feature-aware mechanism that facilitates adaptive learning in multi-domain ABSA tasks. Specifically, we employ a multi-head graph attention network as a text encoder optimized by heuristic rules for linguistic, domain, and sentiment features. Through contrastive learning, we optimize sentence representations by focusing on these diverse features. Additionally, we construct an efficient indexing mechanism, allowing FaiMA to stably retrieve highly relevant examples across multiple dimensions for any given input. To evaluate the efficacy of FaiMA, we build the first multi-domain ABSA benchmark dataset. Extensive experimental results demonstrate that FaiMA achieves significant performance improvements in multiple domains compared to baselines, increasing F1 by 2.07% on average. Source code and datasets are available at https://github.com/SupritYoung/FaiMA.
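To make the feature-aware retrieval concrete, here is a simplified sketch: each sentence gets one embedding per feature view (linguistic, domain, sentiment), produced in the paper by the contrastively trained graph-attention encoder, and per-view cosine similarities are averaged to select in-context examples. The function and variable names are assumptions; FaiMA's actual indexing mechanism is more elaborate.

```python
# Simplified sketch of feature-aware example retrieval: average cosine
# similarity across feature views (linguistic, domain, sentiment) and return
# the indices of the top-k most relevant pool examples.
import numpy as np

def retrieve_examples(query_views: dict[str, np.ndarray],
                      pool_views: dict[str, np.ndarray],
                      k: int = 4) -> list[int]:
    """query_views[view] -> (dim,); pool_views[view] -> (num_examples, dim)."""
    scores = None
    for view, q in query_views.items():
        p = pool_views[view]
        sims = p @ q / (np.linalg.norm(p, axis=1) * np.linalg.norm(q) + 1e-8)
        scores = sims if scores is None else scores + sims
    scores /= len(query_views)
    return np.argsort(-scores)[:k].tolist()

# Usage: idx = retrieve_examples({"domain": qd, "sentiment": qs}, {"domain": Pd, "sentiment": Ps})
```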
MRC-based Nested Medical NER with Co-prediction and Adaptive Pre-training
Xiaojing Du | Hanjie Zhao | Danyan Xing | Yuxiang Jia | Hongying Zan
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In medical information extraction, medical Named Entity Recognition (NER) is indispensable, playing a crucial role in developing medical knowledge graphs, enhancing medical question-answering systems, and analyzing electronic medical records. The challenge in medical NER arises from complex nested structures and specialized medical terminology, distinguishing it from its counterparts in general domains. In response to these complexities, we propose a medical NER model based on Machine Reading Comprehension (MRC), which uses a task-adaptive pre-training strategy to improve the model’s capability in the medical field. Meanwhile, our model introduces multiple word-pair embeddings and multi-granularity dilated convolution to enhance its representation ability, and uses a combined predictor of Biaffine and MLP to improve recognition performance. Experimental evaluations on CMeEE, a benchmark for Chinese nested medical NER, demonstrate that our proposed model outperforms the compared state-of-the-art (SOTA) models.
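The co-prediction idea can be sketched briefly in PyTorch: every (start, end) word pair is scored by both a biaffine classifier and an MLP classifier, and the two score tensors are combined. The dimensions and the simple additive combination below are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of Biaffine + MLP co-prediction over word pairs.
import torch
import torch.nn as nn

class CoPredictor(nn.Module):
    def __init__(self, hidden: int = 256, num_labels: int = 9):
        super().__init__()
        self.head = nn.Linear(hidden, hidden)
        self.tail = nn.Linear(hidden, hidden)
        self.biaffine = nn.Parameter(torch.randn(num_labels, hidden + 1, hidden + 1) * 0.02)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, num_labels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden) token representations
        h, t = self.head(x), self.tail(x)
        ones = torch.ones(*h.shape[:2], 1, device=x.device)
        h1, t1 = torch.cat([h, ones], -1), torch.cat([t, ones], -1)
        # Biaffine scores for every (i, j) pair and label.
        bi = torch.einsum("bih,lhk,bjk->bijl", h1, self.biaffine, t1)
        # MLP scores over concatenated pair representations.
        pair = torch.cat([h.unsqueeze(2).expand(-1, -1, x.size(1), -1),
                          t.unsqueeze(1).expand(-1, x.size(1), -1, -1)], dim=-1)
        return bi + self.mlp(pair)   # (batch, seq_len, seq_len, num_labels)

if __name__ == "__main__":
    print(CoPredictor()(torch.randn(2, 10, 256)).shape)  # torch.Size([2, 10, 10, 9])
```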
ZZU-NLP at SIGHAN-2024 dimABSA Task: Aspect-Based Sentiment Analysis with Coarse-to-Fine In-context Learning
Senbin Zhu | Hanjie Zhao | Wxr Wxr | 18437919080@163.com | Yuxiang Jia | Hongying Zan
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
The DimABSA task requires fine-grained sentiment intensity prediction for restaurant reviews, including scores for the Valence and Arousal dimensions for each Aspect Term. In this study, we propose a Coarse-to-Fine In-context Learning (CFICL) method based on the Baichuan2-7B model for the DimABSA task in the SIGHAN 2024 workshop. Our method improves prediction accuracy through a two-stage optimization process. In the first stage, we use fixed in-context examples and prompt templates to enhance the model’s sentiment recognition capability and provide initial predictions for the test data. In the second stage, we encode the Opinion field using BERT and select the most similar training data as new in-context examples based on similarity. These examples include the Opinion field and its scores, as well as related opinion words and their average scores. By filtering for sentiment polarity, we ensure that the examples are consistent with the test data. Our method significantly improves prediction accuracy and consistency by effectively utilizing training data and optimizing in-context examples, as validated by experimental results.
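A minimal sketch of the second-stage example selection, under the assumption that a BERT-style encoder and cosine-similarity search are used: candidates are first filtered to match the predicted sentiment polarity, then ranked by similarity of their Opinion fields to the test Opinion. The model name and field layout are illustrative.

```python
# Sketch of second-stage example selection: encode the Opinion field with a
# BERT-style encoder, keep training items whose polarity matches the stage-one
# prediction, and take the most similar ones as new in-context examples.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("bert-base-chinese")  # any BERT-style encoder works here

def select_examples(test_opinion: str, predicted_polarity: str,
                    train_items: list[dict], k: int = 5) -> list[dict]:
    """train_items: [{"opinion": ..., "polarity": ..., "valence": ..., "arousal": ...}, ...]"""
    candidates = [it for it in train_items if it["polarity"] == predicted_polarity]
    if not candidates:
        candidates = train_items
    cand_emb = encoder.encode([it["opinion"] for it in candidates], convert_to_tensor=True)
    query_emb = encoder.encode(test_opinion, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, cand_emb, top_k=k)[0]
    return [candidates[h["corpus_id"]] for h in hits]
```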
2023
A Corpus for Named Entity Recognition in Chinese Novels with Multi-genres
Hanjie Zhao | Jinge Xie | Yuchen Yan | Yuxiang Jia | Yawen Ye | Hongying Zan
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation
Shuo Xu | Yuxiang Jia | Changyong Niu | Hongying Zan
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Emotion recognition in conversation is important for an empathetic dialogue system to understand the user’s emotion and then generate appropriate emotional responses. However, most previous research focuses on modeling conversational context primarily based on the textual modality or simply utilizes multimodal information through feature concatenation. To exploit multimodal and contextual information more effectively, we propose a multimodal directed acyclic graph (MMDAG) network by injecting information flows inside modalities and across modalities into the DAG architecture. Experiments on IEMOCAP and MELD show that our model outperforms other state-of-the-art models. Comparative studies validate the effectiveness of the proposed modality fusion method.
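As a rough single-layer illustration of DAG-style propagation, the sketch below lets each utterance node aggregate the states of its predecessor nodes (earlier turns it is linked to) with attention before updating its own state via a GRU cell; MMDAG's intra- and cross-modal information flows and fusion are richer than this simplification, and all names here are assumptions.

```python
# Highly simplified sketch of DAG propagation over a conversation: each
# utterance aggregates the states of its predecessors, then updates its state.
import torch
import torch.nn as nn

class SimpleDAGLayer(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, nodes: torch.Tensor, predecessors: list[list[int]]) -> torch.Tensor:
        # nodes: (num_utterances, dim); predecessors[i]: indices of earlier linked utterances
        states = []
        for i in range(nodes.size(0)):
            if predecessors[i]:
                prev = torch.stack([states[j] for j in predecessors[i]])      # (p, dim)
                att = torch.softmax(
                    self.score(torch.cat([prev, nodes[i].expand_as(prev)], -1)).squeeze(-1), 0)
                context = (att.unsqueeze(-1) * prev).sum(0)                   # (dim,)
            else:
                context = torch.zeros_like(nodes[i])
            states.append(self.update(context.unsqueeze(0), nodes[i].unsqueeze(0)).squeeze(0))
        return torch.stack(states)
```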
2021
融入篇章信息的文学作品命名实体识别(Document-level Literary Named Entity Recognition)
Yuxiang Jia (贾玉祥) | Rui Chao (晁睿) | Hongying Zan (昝红英) | Huayi Dou (窦华溢) | Shuai Cao (曹帅) | Shuo Xu (徐硕)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
Named entity recognition is fundamental to the intelligent analysis of literary works, yet research on named entity recognition in the literary domain is still relatively weak, largely due to the lack of annotated corpora. Starting from Jin Yong's novels, this paper annotates named entities in two novels totaling more than 1.8 million Chinese characters, labeling over 50,000 entities of four types. Considering the characteristics of novel texts, we propose a named entity recognition model that incorporates document-level information, introducing a document-level dictionary to preserve the historical states of Chinese characters and fusing the BiGRU-CRF and Transformer models through confidence computation. Experimental results show that document-level information effectively improves named entity recognition performance. Finally, we also explore the application of named entity recognition to constructing social networks of characters in novels.
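The abstract does not specify the confidence computation, so the following is only an assumed per-token fusion rule illustrating the idea: keep the BiGRU-CRF label unless the Transformer model is markedly more confident.

```python
# Illustrative per-token fusion of two taggers: prefer the BiGRU-CRF label
# unless the Transformer is clearly more confident. The margin rule is an
# assumption for illustration, not the paper's exact confidence computation.
def fuse_predictions(crf_labels, crf_confidences, trf_labels, trf_confidences, margin=0.1):
    """All arguments are per-token lists of equal length; confidences are in [0, 1]."""
    fused = []
    for c_lab, c_conf, t_lab, t_conf in zip(crf_labels, crf_confidences, trf_labels, trf_confidences):
        if c_lab != t_lab and t_conf > c_conf + margin:
            fused.append(t_lab)   # trust the Transformer when it is clearly more confident
        else:
            fused.append(c_lab)   # otherwise keep the BiGRU-CRF prediction
    return fused
```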
2012
A Comparison of Chinese Word Segmentation on News and Microblog Corpora with a Lexicon Based Method
Yuxiang Jia | Hongying Zan | Ming Fan | Zhimin Wang
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing
2010
Chinese Word Sense Induction with Basic Clustering Algorithms
Yuxiang Jia | Shiwen Yu | Zhengyan Chen
CIPS-SIGHAN Joint Conference on Chinese Language Processing
2009
A Noisy Channel Model for Grapheme-based Machine Transliteration
Yuxiang Jia | Danqing Zhu | Shiwen Yu
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)
Chinese Semantic Class Learning from Web Based on Concept-Level Characteristics
Wenbo Pang | Xiaozhong Fan | Jiangde Yu | Yuxiang Jia
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 1
2008
Unsupervised Chinese Verb Metaphor Recognition Based on Selectional Preferences
Yuxiang Jia | Shiwen Yu
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation