Yahui Liu


2023

CCL23-Eval 任务3系统报告:苏州大学CFSP系统(System Report for CCL23-Eval Task3: SUDA CFSP System)
Yahui Liu (刘亚慧) | Zhenghua Li (李正华) | Min Zhang (张民)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

This paper describes the system we submitted to the Chinese Frame Semantic Parsing (CFSP) evaluation at the 22nd Chinese National Conference on Computational Linguistics (CCL 2023). Frame semantic parsing is an important task in natural language processing whose goal is to extract frame semantic structures from sentences. For the three subtasks of the evaluation (frame identification, argument span identification, and argument role identification), our system uses separate end-to-end frameworks, and further improves prediction accuracy through data augmentation and voting. It ultimately ranked second on the leaderboard-A test set and third on the leaderboard-B test set.
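The abstract mentions a voting step; as a hedged illustration only (not the SUDA team's actual implementation, and with invented frame labels), majority voting over the predictions of several models can be sketched as:

    from collections import Counter

    def majority_vote(predictions):
        # predictions: one list of labels per model, aligned by item;
        # pick the most frequent label for each item.
        return [Counter(labels).most_common(1)[0][0]
                for labels in zip(*predictions)]

    # Hypothetical frame labels from three models for four target words.
    models = [
        ["Motion", "Giving", "Motion", "Statement"],
        ["Motion", "Statement", "Motion", "Statement"],
        ["Arriving", "Giving", "Motion", "Statement"],
    ]
    print(majority_vote(models))  # ['Motion', 'Giving', 'Motion', 'Statement']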

2022

MuCPAD: A Multi-Domain Chinese Predicate-Argument Dataset
Yahui Liu | Haoping Yang | Chen Gong | Qingrong Xia | Zhenghua Li | Min Zhang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

During the past decade, neural network models have made tremendous progress on in-domain semantic role labeling (SRL). However, performance drops dramatically under the out-of-domain setting. In order to facilitate research on cross-domain SRL, this paper presents MuCPAD, a multi-domain Chinese predicate-argument dataset consisting of 30,897 sentences and 92,051 predicates from six different domains. MuCPAD exhibits three important features. 1) Based on a frame-free annotation methodology, we avoid writing complex frames for new predicates. 2) We explicitly annotate omitted core arguments to recover more complete semantic structures, considering that omission of content words is ubiquitous in multi-domain Chinese texts. 3) We compile 53 pages of annotation guidelines and adopt strict double annotation to improve data quality. This paper describes the annotation methodology and annotation process of MuCPAD in detail, presents an in-depth data analysis, and reports benchmark results on cross-domain SRL based on MuCPAD.

Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics
Jiannan Xiang | Huayang Li | Yahui Liu | Lemao Liu | Guoping Huang | Defu Lian | Shuming Shi
Findings of the Association for Computational Linguistics: ACL 2022

Current practice in metric evaluation focuses on a single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task. In this paper, however, we show qualitatively and quantitatively that metric performance is sensitive to the data: the ranking of metrics varies when the evaluation is conducted on different datasets. We then investigate two potential causes of this data variance, namely insignificant data points and deviation from the i.i.d. assumption. In conclusion, our findings suggest that when evaluating automatic translation metrics, researchers should take data variance into account and be cautious about reporting results on unreliable datasets, since these may be inconsistent with results on most other datasets.
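To make the setting concrete, here is a hedged sketch (all metric names, datasets, and scores below are invented) of the per-dataset evaluation protocol the paper examines: metrics are ranked by Pearson correlation with human judgments, computed separately on each dataset, and the ranking can flip between datasets.

    from scipy.stats import pearsonr

    def rank_metrics(metric_scores, human_scores):
        # Rank candidate metrics by Pearson correlation with human judgments.
        corrs = {name: pearsonr(scores, human_scores)[0]
                 for name, scores in metric_scores.items()}
        return sorted(corrs, key=corrs.get, reverse=True)

    # Hypothetical segment-level scores on two toy datasets.
    human_a = [0.9, 0.4, 0.7, 0.2]
    metrics_a = {"metricX": [0.8, 0.5, 0.6, 0.3],
                 "metricY": [0.7, 0.6, 0.9, 0.1]}
    human_b = [0.3, 0.8, 0.5, 0.6]
    metrics_b = {"metricX": [0.4, 0.6, 0.7, 0.5],
                 "metricY": [0.2, 0.9, 0.4, 0.7]}

    print(rank_metrics(metrics_a, human_a))  # ['metricX', 'metricY']
    print(rank_metrics(metrics_b, human_b))  # ['metricY', 'metricX'] -- the ranking flips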

2021

Assessing Dialogue Systems with Distribution Distances
Jiannan Xiang | Yahui Liu | Deng Cai | Huayang Li | Defu Lian | Lemao Liu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2018

Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method
Yahui Liu | Wei Bi | Jun Gao | Xiaojiang Liu | Jian Yao | Shuming Shi
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Sequence-to-sequence neural generation models have achieved promising performance on short-text conversation tasks. However, they tend to generate generic/dull responses, leading to an unsatisfying dialogue experience. We observe that in conversation tasks each query can have multiple responses, forming a 1-to-n or m-to-n relationship when viewed over the whole corpus. The objective function used in standard sequence-to-sequence models is then dominated by the loss terms of responses with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights to the multiple responses of the same query and trains the common neural generation model with these weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.
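As a hedged sketch of the general idea only (the paper's actual statistical weighting scheme is not reproduced here), a per-response weight can scale the standard token-level cross-entropy so that frequent generic responses contribute less to training:

    import torch
    import torch.nn.functional as F

    def weighted_response_loss(logits, targets, response_weights, pad_id=0):
        # logits: (batch, seq_len, vocab); targets: (batch, seq_len);
        # response_weights: (batch,), one weight per (query, response) pair.
        token_loss = F.cross_entropy(logits.transpose(1, 2), targets,
                                     ignore_index=pad_id, reduction="none")
        mask = (targets != pad_id).float()
        # Average token loss within each response, then apply its weight.
        per_response = (token_loss * mask).sum(1) / mask.sum(1).clamp(min=1)
        return (response_weights * per_response).mean()

Assigning weights below 1 to responses matching generic patterns would down-weight them relative to informative responses of the same query.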