大语言模型在中文文本纠错任务的评测(Evaluation of large language models for Chinese text error correction tasks)

Mu Lingling (穆玲玲), Wang Xiaoying (王晓盈), Cui Jiajia (崔佳佳)


Abstract
The capabilities of large language models (LLMs) on natural language processing tasks such as information extraction and machine translation have been widely evaluated, but in text error correction, evaluation has so far been largely limited to GPT's English grammatical error correction ability. Chinese text error correction comprises two subtasks: Chinese Grammatical Error Detection (CGED) and Chinese Grammatical Error Correction (CGEC). This paper uses prompting to evaluate mainstream domestic and international large models on both subtasks. We design different prompting strategies and analyze the results both overall and at a fine-grained level. Experimental results on the NLPCC2018 and CGED2018 test sets show that ERNIE-4 and ChatGLM-4 outperform GPT-3.5-Turbo and LLaMa-2-7B-Chat in Chinese text error correction, that the few-shot chain-of-thought prompting strategy performs best, and that correction accuracy is comparatively high on word-order and spelling errors, indicating that large models have good Chinese text error correction ability in low-resource settings. However, the test results also show that the recall of the large models is at least 14 percentage points higher than that of the baseline models, indicating that large models suffer from over-correction on Chinese text error correction tasks.
Anthology ID:
2024.ccl-1.62
Volume:
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Month:
July
Year:
2024
Address:
Taiyuan, China
Editors:
Maosong Sun, Jiye Liang, Xianpei Han, Zhiyuan Liu, Yulan He
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Pages:
790–806
Language:
Chinese
URL:
https://aclanthology.org/2024.ccl-1.62/
Cite (ACL):
Mu Lingling, Wang Xiaoying, and Cui Jiajia. 2024. 大语言模型在中文文本纠错任务的评测(Evaluation of large language models for Chinese text error correction tasks). In Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference), pages 790–806, Taiyuan, China. Chinese Information Processing Society of China.
Cite (Informal):
大语言模型在中文文本纠错任务的评测(Evaluation of large language models for Chinese text error correction tasks) (Mu et al., CCL 2024)
PDF:
https://aclanthology.org/2024.ccl-1.62.pdf