2024
Towards Better Utilization of Multi-Reference Training Data for Chinese Grammatical Error Correction
Yumeng Liu | Zhenghua Li | HaoChen Jiang | Bo Zhang | Chen Li | Ji Zhang
Findings of the Association for Computational Linguistics: ACL 2024
For the grammatical error correction (GEC) task, an erroneous input sentence can usually be corrected in multiple ways, leading to multiple references. Observing the high proportion of multi-reference instances in Chinese GEC training data, we conduct a systematic study of how to better utilize multi-reference training data. We propose two new approaches and a simple two-stage training strategy. We compare them against previously proposed approaches on two Chinese training datasets, i.e., Lang-8 for second-language-learner texts and FCGEC-Train for native-speaker texts, and three test datasets. The experiments and analyses demonstrate the effectiveness of our proposed approaches and reveal interesting insights. Our code is available at https://github.com/ymliucs/MrGEC.
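As a generic illustration of one common way to exploit multi-reference data (not necessarily either of the approaches proposed in the paper), the minimal PyTorch sketch below computes the loss against every reference and keeps only the smallest, so the model is free to match whichever correction it finds easiest; `multi_reference_loss` and the toy tensors are hypothetical.

```python
# A minimal "min-loss over references" sketch for multi-reference GEC
# training. Generic illustration only, NOT the paper's proposed method.
import torch
import torch.nn.functional as F

def multi_reference_loss(logits_per_ref, targets_per_ref, pad_id=0):
    """For one source sentence with several references, compute the
    cross-entropy against each reference and keep only the smallest.

    logits_per_ref: list of tensors, each (tgt_len, vocab_size)
    targets_per_ref: list of tensors, each (tgt_len,)
    """
    losses = []
    for logits, target in zip(logits_per_ref, targets_per_ref):
        losses.append(F.cross_entropy(logits, target, ignore_index=pad_id))
    return torch.stack(losses).min()

# Toy usage: two references of different lengths for one source sentence.
vocab = 10
refs = [torch.tensor([3, 5, 2]), torch.tensor([3, 7, 7, 2])]
logits = [torch.randn(len(r), vocab, requires_grad=True) for r in refs]
loss = multi_reference_loss(logits, refs)
loss.backward()  # gradients flow only through the lowest-loss reference
```

One design consequence of the min-loss objective is that the model is never penalized for committing to a single correction style, whereas naively training on all references at once averages conflicting targets.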
2023
Improving Seq2Seq Grammatical Error Correction via Decoding Interventions
Houquan Zhou | Yumeng Liu | Zhenghua Li | Min Zhang | Bo Zhang | Chen Li | Ji Zhang | Fei Huang
Findings of the Association for Computational Linguistics: EMNLP 2023
The sequence-to-sequence (Seq2Seq) approach has recently been widely used in grammatical error correction (GEC) and shows promising performance. However, the Seq2Seq GEC approach still suffers from two issues. First, a Seq2Seq GEC model can only be trained on parallel data, which, for the GEC task, is often noisy and limited in quantity. Second, the decoder of a Seq2Seq GEC model lacks an explicit awareness of the correctness of the token being generated. In this paper, we propose a unified decoding intervention framework that employs an external critic to incrementally assess the appropriateness of the token to be generated, and then dynamically influence the choice of the next token. We discover and investigate two types of critics: a pre-trained left-to-right language model critic and an incremental target-side grammatical error detector critic. Through extensive experiments on English and Chinese datasets, our framework consistently outperforms strong baselines and achieves results competitive with state-of-the-art methods.
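The sketch below illustrates the general idea of a decoding intervention: at each step, the Seq2Seq model's next-token distribution is re-weighted by an external critic (here, a left-to-right LM). `gec_step`, `critic_step`, and the interpolation weight `alpha` are hypothetical stand-ins, and the log-probability interpolation is a simplified scheme, not the paper's exact formulation.

```python
# A minimal sketch of critic-guided greedy decoding.
# Generic illustration; the paper's actual intervention may differ.
import torch

def decode_with_critic(gec_step, critic_step, bos_id, eos_id,
                       alpha=0.5, max_len=50):
    """Greedy decoding where the next token maximizes a weighted sum of
    the GEC model's and the critic's log-probabilities."""
    tokens = [bos_id]
    for _ in range(max_len):
        gec_logp = torch.log_softmax(gec_step(tokens), dim=-1)
        critic_logp = torch.log_softmax(critic_step(tokens), dim=-1)
        scores = (1 - alpha) * gec_logp + alpha * critic_logp
        next_id = int(scores.argmax())
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy usage with random "models" over a 10-token vocabulary.
vocab = 10
fake_gec = lambda toks: torch.randn(vocab)
fake_critic = lambda toks: torch.randn(vocab)
print(decode_with_critic(fake_gec, fake_critic, bos_id=1, eos_id=2))
```

Because the critic scores each prefix incrementally, this intervention adds only one extra forward pass per decoding step and requires no retraining of the Seq2Seq model.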
CCL23-Eval任务7赛道一系统报告:Suda & Alibaba 文本纠错系统 (CCL23-Eval Task 7 Track 1 System Report: Suda & Alibaba Team Text Error Correction System)
Haochen Jiang (蒋浩辰) | Yumeng Liu (刘雨萌) | Houquan Zhou (周厚全) | Ziheng Qiao (乔子恒) | Bo Zhang (章波) | Chen Li (李辰) | Zhenghua Li (李正华) | Min Zhang (张民)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
This report describes the system submitted by the Suda & Alibaba error correction team to Track 1 of the CCL2023 Chinese Learner Text Correction shared task: Multidimensional Chinese Learner Text Correction. On the model side, the team used two kinds of correction models, sequence-to-sequence and sequence-to-edit. On the data side, the team performed three-stage training using pseudo data constructed from confusion sets, real Lang-8 data, and the YACLC development set; on the open track, additional data such as HSK and CGED were also used for training. The team further applied a series of effective performance-boosting techniques, including rule-based data augmentation, data cleaning, post-processing, and model ensembling. Beyond that, the team explored how large language models such as GPT-3.5 and GPT-4 can assist Chinese text correction, proposed a method that effectively avoids the over-correction problem of large models, and experimented with a variety of prompts. On both the closed and open tracks, the team ranked first in minimal-edit, fluency-improvement, and average F0.5 scores.
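As a generic illustration of sentence-level model ensembling for text correction (one of the techniques listed above, though not necessarily the team's exact method), the sketch below takes a majority vote over the systems' outputs and falls back to the unchanged source on ties, which also biases against over-correction; `ensemble_vote` and the example strings are hypothetical.

```python
# A minimal sentence-level ensemble-by-majority-vote sketch.
# Generic illustration only, not the team's actual ensembling scheme.
from collections import Counter

def ensemble_vote(source, candidates):
    """Return the correction proposed by the most systems; keep the
    source unchanged unless a strict majority agrees on an edit."""
    counts = Counter(candidates)
    best, best_n = counts.most_common(1)[0]
    if best != source and best_n <= len(candidates) / 2:
        return source  # no strict majority for any change: be conservative
    return best

# Toy usage: three system outputs, two of which agree.
outputs = ["他喜欢吃苹果。", "他喜欢吃苹果。", "他喜欢去吃苹果。"]
print(ensemble_vote("他喜欢吃苹果", outputs))
```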