Yangsen Zhang


2024

DLM: A Decoupled Learning Model for Long-tailed Polyphone Disambiguation in Mandarin
Beibei Gao | Yangsen Zhang | Ga Xiang | Yushan Jiang
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Grapheme-to-phoneme (G2P) conversion is a critical component of text-to-speech (TTS) systems, in which polyphone disambiguation is the most crucial task. However, polyphone disambiguation datasets often suffer from the long-tail problem, and context learning for polyphonic characters commonly draws on only a single dimension. In this paper, we propose DLM, a Decoupled Learning Model for long-tailed polyphone disambiguation in Mandarin. First, DLM decouples representation learning from classifier learning, allowing a different data sampler at each stage to obtain an optimal training data distribution and thereby mitigate the long-tail problem. Second, two improved attention mechanisms and a gradual conversion strategy are integrated into DLM, enabling context learning to transition from local to global. Finally, to evaluate the effectiveness of DLM, we construct a balanced polyphone disambiguation corpus via in-context learning. Experiments on the benchmark CPP dataset demonstrate that DLM achieves an accuracy of 99.07%. Moreover, DLM improves disambiguation performance on long-tailed polyphonic characters, even reaching 100% accuracy on many of them.

2023

基于RoBERTa的中文仇恨言论侦测方法研究 (Chinese Hate Speech Detection Method Based on RoBERTa-WWM)
Xiaojun Rao | Yangsen Zhang | Shuang Peng | Qilong Jia | Xueyang Liu
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

With the spread of the Internet, social media has provided a platform for exchanging views, but its virtual and anonymous nature has also intensified the spread of hate speech, making automatic hate speech detection critical to the healthy development of social media platforms. To address this problem, we construct a Chinese hate speech dataset, CHSD, and propose a Chinese hate speech detection model, RoBERTa-CHHSD. The model first uses the RoBERTa pre-trained language model to encode Chinese hate speech text and extract textual features. It then feeds these features into a TextCNN model and a Bi-GRU model to extract multi-level local semantic features and global inter-sentence dependency information, respectively. The two outputs are fused to capture deeper hate speech features in the text and classify Chinese hate speech, thereby achieving detection. Experimental results show that the model achieves an F1 score of 89.12% on the CHSD dataset, an improvement of 1.76% over the current best mainstream model, RoBERTa-WWM.

CCL23-Eval 任务7系统报告:基于序列标注和指针生成网络的语法纠错方法 (System Report for CCL23-Eval Task 7: A Grammatical Error Correction Approach Based on Sequence Labeling and Pointer-Generator Networks)
Youren Yu (于右任) | Yangsen Zhang (张仰森) | Guanguang Chang (畅冠光) | Beibei Gao (高贝贝) | Yushan Jiang (姜雨杉) | Tuo Xiao (肖拓)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

To address the problems of inaccurate error-boundary identification and over-correction in most current Chinese grammatical error correction models, we propose a Chinese grammatical error correction model based on sequence labeling and pointer-generator networks. First, on the data side, we used the officially provided Lang8 dataset and CGED datasets from previous years, applying traditional-to-simplified character conversion, data cleaning, and other preprocessing. Second, on the model side, we adopted a sequence labeling model based on ERNIE + Global Pointer, a sequence labeling model based on ERNIE + CRF, a correction model based on BART with a pointer-generator network, and a correction model based on GECToR. Finally, for model ensembling, we used voting together with ERNIE-based perplexity scoring to produce the final predictions. On the test set, our score reached 48.68, ranking second.

2010

A domain adaption Word Segmenter For Sighan Backoff 2010
Jiang Guo | Wenjie Su | Yangsen Zhang
CIPS-SIGHAN Joint Conference on Chinese Language Processing