Yutao Xie


2024

Metric-Free Learning Network with Dual Relations Propagation for Few-Shot Aspect Category Sentiment Analysis
Shiman Zhao | Yutao Xie | Wei Chen | Tengjiao Wang | Jiahui Yao | Jiabin Zheng
Transactions of the Association for Computational Linguistics, Volume 12

Few-shot Aspect Category Sentiment Analysis (ACSA) is a crucial task for aspect-based sentiment analysis, which aims to detect the sentiment polarity of a given aspect category in a sentence with limited data. However, existing few-shot learning methods classify queries by distance metrics between the query and support sets, relying heavily on aspect distributions in the embedding space. They therefore suffer from overlapping distributions of aspect embeddings, caused by irrelevant sentiment noise among sentences with multiple sentiment aspects, which leads to misclassifications. To address these issues, we propose a metric-free method for few-shot ACSA that models the associated relations among the aspects of support and query sentences by Dual Relations Propagation (DRP), mitigating the adverse effect of overlapping distributions. Specifically, DRP uses the dual relations (similarity and diversity) among the aspects of support and query sentences to explore intra-cluster commonality and inter-cluster uniqueness, alleviating sentiment noise and enhancing aspect features. Additionally, the dual relations are transformed from support-query to class-query to promote query inference by learning class knowledge. Experiments show that our method achieves convincing performance on few-shot ACSA, including an average improvement of 2.93% in accuracy and 2.10% in F1 score in the 3-way 1-shot setting.
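
To make the abstract's mechanism concrete, here is a minimal, hypothetical sketch of dual-relations scoring in PyTorch: pairwise similarity and diversity relations between query and support aspect embeddings are fused, then aggregated per class to turn support-query relations into class-query scores. The tensor layout, the fusion weights, and the per-class averaging are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def drp_scores(support, support_labels, query, n_way):
    """Hypothetical sketch of dual-relations scoring (not the paper's code).

    support:        [N, d] aspect embeddings of support sentences
    support_labels: [N]    class index of each support embedding
    query:          [Q, d] aspect embeddings of query sentences
    Returns [Q, n_way] class scores for each query.
    """
    # Similarity relation: cosine similarity between query and support aspects
    # (intra-cluster commonality).
    sim = F.cosine_similarity(query.unsqueeze(1), support.unsqueeze(0), dim=-1)  # [Q, N]

    # Diversity relation: distance-based term meant to reflect inter-cluster
    # uniqueness, rescaled so that larger means "more related".
    div = torch.cdist(query, support, p=2)                                       # [Q, N]
    div = 1.0 - div / (div.max() + 1e-8)

    # Fuse the dual relations into a single support-query relation
    # (equal weighting is an arbitrary assumption here).
    relation = 0.5 * (sim + div)                                                 # [Q, N]

    # Transform support-query relations into class-query relations by
    # averaging relations over the support examples of each class.
    one_hot = F.one_hot(support_labels, n_way).float()                           # [N, n_way]
    return relation @ one_hot / one_hot.sum(0).clamp(min=1.0)                    # [Q, n_way]

# Example: a 3-way 1-shot episode with 2 queries and 16-dim embeddings.
support = torch.randn(3, 16)
labels = torch.tensor([0, 1, 2])
query = torch.randn(2, 16)
print(drp_scores(support, labels, query, n_way=3))
```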

2023

CCEval: A Representative Evaluation Benchmark for the Chinese-centric Multilingual Machine Translation
Lianzhang Lou | Xi Yin | Yutao Xie | Yang Xiang
Findings of the Association for Computational Linguistics: EMNLP 2023

Chinese-centric Multilingual Machine Translation (MMT) has gained importance recently due to increasing demand from international business development and cross-cultural exchange. However, an important factor limiting progress in this area is the lack of highly representative, high-quality evaluation benchmarks. To fill this gap, we propose CCEval, an impartial and representative Chinese-centric MMT evaluation dataset. The benchmark consists of 2,500 Chinese sentences that we meticulously selected and processed, and it covers more diverse linguistic features than other MMT evaluation benchmarks. These sentences have been translated into 11 languages of various resource levels by professional translators through a rigorously controlled pipeline to ensure high quality. We conduct experiments demonstrating that our sampling methodology constructs evaluation datasets that correlate strongly with human evaluations. The resulting dataset enables better assessment of Chinese-centric MMT quality. Our CCEval benchmark dataset is available at https://bright.pcl.ac.cn/en/offlineTasks.
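
CCEval itself is distributed through the linked platform. As a hedged illustration of how such a benchmark is typically consumed, the sketch below scores system outputs against references with sacreBLEU and correlates automatic scores with human judgments; all sentences and numbers are invented placeholders, and none of this is part of the CCEval release.

```python
# pip install sacrebleu scipy
import sacrebleu
from scipy.stats import pearsonr

# Placeholder data: in practice the references come from the benchmark
# and the hypotheses from each MT system under evaluation.
references = [["The cat sits on the mat.", "It is raining today."]]
hypotheses = ["The cat sat on the mat.", "Today it rains."]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")

# Validating a benchmark usually involves checking that automatic scores
# track human judgments across systems (all values below are invented).
system_bleu = [23.4, 31.0, 28.7, 35.2]
human_ratings = [62.0, 74.5, 70.1, 81.3]
r, p = pearsonr(system_bleu, human_ratings)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```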

2022

BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model
Hongyi Yuan | Zheng Yuan | Ruyi Gan | Jiaxing Zhang | Yutao Xie | Sheng Yu
Proceedings of the 21st Workshop on Biomedical Language Processing

Pretrained language models have served as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit various domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance yet remain understudied. Approaching natural language understanding (NLU) tasks as NLG achieves satisfactory performance in the general domain through constrained language generation or language prompting. We highlight the lack of in-domain generative language models and of systematic generative downstream benchmarks in the biomedical domain, both of which hinder the development of the research community. In this work, we introduce BioBART, a generative language model that adapts BART to the biomedical domain. We collate various biomedical language generation tasks, including dialogue, summarization, entity linking, and named entity recognition. Pretrained on PubMed abstracts, BioBART outperforms BART and sets strong baselines on several tasks. Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.
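
As a small usage sketch, a BART-style biomedical checkpoint can be loaded with Hugging Face transformers for generation. The checkpoint identifier below is an assumption about where the model is published; substitute the authors' released ID if it differs.

```python
# pip install transformers torch
from transformers import AutoTokenizer, BartForConditionalGeneration

# Checkpoint name is an assumption, not confirmed by the abstract.
model_name = "GanjinZero/biobart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Toy abstractive generation over a biomedical sentence.
text = ("Metformin is a first-line therapy for type 2 diabetes and acts "
        "primarily by reducing hepatic glucose production.")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```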