2024
ZZU-NLP at SIGHAN-2024 dimABSA Task: Aspect-Based Sentiment Analysis with Coarse-to-Fine In-context Learning
Senbin Zhu
|
Hanjie Zhao
|
Wxr Wxr
|
18437919080@163.com
|
Yuxiang Jia
|
Hongying Zan
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
The DimABSA task requires fine-grained sentiment intensity prediction for restaurant reviews, including Valence and Arousal scores for each Aspect Term. In this study, we propose a Coarse-to-Fine In-context Learning (CFICL) method based on the Baichuan2-7B model for the DimABSA task at the SIGHAN 2024 workshop. Our method improves prediction accuracy through a two-stage optimization process. In the first stage, we use fixed in-context examples and prompt templates to enhance the model’s sentiment recognition capability and to produce initial predictions for the test data. In the second stage, we encode the Opinion field with BERT and select the most similar training instances as new in-context examples. These examples include the Opinion field and its scores, as well as related opinion words and their average scores. By filtering for sentiment polarity, we ensure that the examples are consistent with the test data. Experimental results validate that our method significantly improves prediction accuracy and consistency by effectively utilizing the training data and optimizing the in-context examples.
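As a rough illustration of the second-stage retrieval described in the abstract, the sketch below selects in-context examples by cosine similarity between embeddings. The `select_examples` helper, the toy 2-d vectors standing in for BERT encodings of the Opinion field, and the item strings are all hypothetical, not taken from the paper.

```python
import numpy as np

def select_examples(test_vec, train_vecs, train_items, k=3):
    """Pick the k training items whose Opinion embeddings are
    closest (by cosine similarity) to the test Opinion embedding."""
    train_vecs = np.asarray(train_vecs, dtype=float)
    test_vec = np.asarray(test_vec, dtype=float)
    sims = train_vecs @ test_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(test_vec) + 1e-9
    )
    top = np.argsort(-sims)[:k]
    return [train_items[i] for i in top]

# toy usage: 2-d "embeddings" in place of real BERT vectors
items = ["great food #8.1", "slow service #3.2", "cozy place #7.0"]
vecs = [[1.0, 0.1], [-0.9, 0.2], [0.8, 0.4]]
print(select_examples([1.0, 0.2], vecs, items, k=2))
# → ['great food #8.1', 'cozy place #7.0']
```

In the paper's setting, the retrieved examples would additionally be filtered by sentiment polarity before being placed into the prompt.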
FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based Sentiment Analysis
Songhua Yang
|
Xinke Jiang
|
Hanjie Zhao
|
Wenxuan Zeng
|
Hongde Liu
|
Yuxiang Jia
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture fine-grained sentiment across diverse domains. While existing research narrowly focuses on single-domain applications, constrained by methodological limitations and data scarcity, in reality sentiment naturally traverses multiple domains. Although large language models (LLMs) offer a promising solution for ABSA, they are difficult to integrate effectively with established techniques such as graph-based models and linguistic features, because modifying their internal architecture is not easy. To alleviate this problem, we propose a novel framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA). The core insight of FaiMA is to utilize in-context learning (ICL) as a feature-aware mechanism that facilitates adaptive learning in multi-domain ABSA tasks. Specifically, we employ a multi-head graph attention network as a text encoder, optimized by heuristic rules for linguistic, domain, and sentiment features. Through contrastive learning, we optimize sentence representations by focusing on these diverse features. Additionally, we construct an efficient indexing mechanism that allows FaiMA to stably retrieve highly relevant examples across multiple dimensions for any given input. To evaluate the efficacy of FaiMA, we build the first multi-domain ABSA benchmark dataset. Extensive experimental results demonstrate that FaiMA achieves significant performance improvements over baselines across multiple domains, increasing F1 by 2.07% on average. Source code and datasets are available at https://github.com/SupritYoung/FaiMA.
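The contrastive optimization of sentence representations mentioned in the abstract can be illustrated, very loosely, with an InfoNCE-style loss. The `info_nce` function, the temperature value, and the random toy vectors below are illustrative assumptions, not the paper's actual training objective.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor should be more
    similar to its own positive than to any other positive in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = matched pairs

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
loss_matched = info_nce(x, x + 0.01 * rng.normal(size=(4, 8)))
loss_random = info_nce(x, rng.normal(size=(4, 8)))
print(loss_matched < loss_random)  # matched pairs should give the lower loss
```

In FaiMA itself, the positives would be chosen according to shared linguistic, domain, and sentiment features rather than by adding noise.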
MRC-based Nested Medical NER with Co-prediction and Adaptive Pre-training
Xiaojing Du
|
Hanjie Zhao
|
Danyan Xing
|
Yuxiang Jia
|
Hongying Zan
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In medical information extraction, medical Named Entity Recognition (NER) is indispensable, playing a crucial role in developing medical knowledge graphs, enhancing medical question-answering systems, and analyzing electronic medical records. The challenge in medical NER arises from complex nested structures and sophisticated medical terminologies, distinguishing it from its counterparts in traditional domains. In response to these complexities, we propose a medical NER model based on Machine Reading Comprehension (MRC), which uses a task-adaptive pre-training strategy to improve the model’s capability in the medical field. In addition, our model introduces multiple word-pair embeddings and multi-granularity dilated convolution to enhance its representation ability, and uses a combined Biaffine and MLP predictor to improve recognition performance. Experimental evaluations conducted on CMeEE, a benchmark for Chinese nested medical NER, demonstrate that our proposed model outperforms the compared state-of-the-art (SOTA) models.
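The combined Biaffine-and-MLP co-prediction over span start/end pairs can be sketched as follows. The dimensions, random weights, and simple averaging of the two scorers' logits are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_labels, seq_len = 8, 3, 5

# token representations for span starts (H) and span ends (T)
H = rng.normal(size=(seq_len, d))
T = rng.normal(size=(seq_len, d))

# biaffine scorer: one (d+1) x (d+1) matrix per entity label,
# with a bias dimension appended to both sides
U = rng.normal(size=(n_labels, d + 1, d + 1)) * 0.1
H1 = np.concatenate([H, np.ones((seq_len, 1))], axis=1)
T1 = np.concatenate([T, np.ones((seq_len, 1))], axis=1)
biaffine = np.einsum("id,ldk,jk->ijl", H1, U, T1)  # (start, end, label)

# MLP scorer over concatenated start/end vectors
W1 = rng.normal(size=(2 * d, 16)) * 0.1
W2 = rng.normal(size=(16, n_labels)) * 0.1
pair = np.concatenate(
    [np.repeat(H[:, None, :], seq_len, 1), np.repeat(T[None, :, :], seq_len, 0)], -1
)
mlp = np.maximum(pair @ W1, 0) @ W2                # ReLU hidden layer

# co-prediction: average the two scorers' logits per (start, end, label)
scores = (biaffine + mlp) / 2
print(scores.shape)  # (5, 5, 3)
```

Because the score tensor covers every (start, end) pair, nested entities fall out naturally: overlapping spans simply occupy different cells.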
2023
A Corpus for Named Entity Recognition in Chinese Novels with Multi-genres
Hanjie Zhao
|
Jinge Xie
|
Yuchen Yan
|
Yuxiang Jia
|
Yawen Ye
|
Hongying Zan
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation
Shuo Xu
|
Yuxiang Jia
|
Changyong Niu
|
Hongying Zan
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Emotion recognition in conversation is important for an empathetic dialogue system to understand the user’s emotion and then generate appropriate emotional responses. However, most previous research models conversational context primarily based on the textual modality, or simply utilizes multimodal information through feature concatenation. To exploit multimodal and contextual information more effectively, we propose a multimodal directed acyclic graph (MMDAG) network that injects information flows within and across modalities into the DAG architecture. Experiments on IEMOCAP and MELD show that our model outperforms other state-of-the-art models. Comparative studies validate the effectiveness of the proposed modality fusion method.
2021
融入篇章信息的文学作品命名实体识别(Document-level Literary Named Entity Recognition)
Yuxiang Jia (贾玉祥)
|
Rui Chao (晁睿)
|
Hongying Zan (昝红英)
|
Huayi Dou (窦华溢)
|
Shuai Cao (曹帅)
|
Shuo Xu (徐硕)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
Named entity recognition (NER) is fundamental to the intelligent analysis of literary works, yet research on NER in the literary domain remains limited, largely due to the lack of annotated corpora. Starting from Jin Yong’s novels, we annotate named entities in two novels totaling more than 1.8 million Chinese characters, labeling over 50,000 entities of four types. Targeting the characteristics of novel text, we propose an NER model that incorporates document-level information: a document-level dictionary preserves the historical states of Chinese characters, and confidence-based computation fuses a BiGRU-CRF model with a Transformer model. Experimental results show that document-level information effectively improves NER performance. Finally, we discuss the application of NER to constructing social networks of novel characters.
2012
A Comparison of Chinese Word Segmentation on News and Microblog Corpora with a Lexicon Based Method
Yuxiang Jia
|
Hongying Zan
|
Ming Fan
|
Zhimin Wang
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing
2010
Chinese Word Sense Induction with Basic Clustering Algorithms
Yuxiang Jia
|
Shiwen Yu
|
Zhengyan Chen
CIPS-SIGHAN Joint Conference on Chinese Language Processing
2009
Chinese Semantic Class Learning from Web Based on Concept-Level Characteristics
Wenbo Pang
|
Xiaozhong Fan
|
Jiangde Yu
|
Yuxiang Jia
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 1
A Noisy Channel Model for Grapheme-based Machine Transliteration
Yuxiang Jia
|
Danqing Zhu
|
Shiwen Yu
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)
2008
Unsupervised Chinese Verb Metaphor Recognition Based on Selectional Preferences
Yuxiang Jia
|
Shiwen Yu
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation