Xinpeng Liu


2025

A Chain-of-Task Framework for Instruction Tuning of LLMs Based on Chinese Grammatical Error Correction
Xinpeng Liu | Bing Xu | Muyun Yang | Hailong Cao | Conghui Zhu | Tiejun Zhao | Wenpeng Lu
Proceedings of the 31st International Conference on Computational Linguistics

Over-correction is a critical issue for large language models (LLMs) addressing the Grammatical Error Correction (GEC) task, especially for Chinese. This paper proposes a Chain-of-Task (CoTask) framework to reduce over-correction. The framework is applied as multi-task instruction tuning of LLMs: the process of grammatical error analysis is decomposed to design auxiliary tasks, and the types and combinations of training tasks are adjusted. A supervised fine-tuning (SFT) strategy is also presented to enhance the performance of LLMs, together with an algorithm for automatic dataset annotation that avoids additional manual annotation costs. Experimental results demonstrate that our method achieves new state-of-the-art results on both the FCGEC (in-domain) and NaCGEC (out-of-domain) test sets.
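As a rough illustration of the multi-task decomposition the abstract describes, the Python sketch below builds instruction-tuning records from a single annotated GEC example by chaining auxiliary tasks (error detection, error type analysis) before the main correction task. The specific task chain, prompts, field names, and the example sentence are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of Chain-of-Task style data construction for
# instruction tuning. The decomposition (detection -> error-type
# analysis -> correction) and all field names are assumptions.

from dataclasses import dataclass, field


@dataclass
class AnnotatedExample:
    source: str                          # possibly erroneous sentence
    target: str                          # corrected sentence
    error_types: list = field(default_factory=list)


def build_cotask_records(ex: AnnotatedExample) -> list[dict]:
    """Decompose one GEC example into a chain of instruction-tuning tasks."""
    has_error = ex.source != ex.target
    records = [
        {  # Auxiliary task 1: binary error detection (intended to curb over-correction)
            "instruction": "Does the following sentence contain a grammatical error? Answer yes or no.",
            "input": ex.source,
            "output": "yes" if has_error else "no",
        }
    ]
    if has_error:
        records.append(
            {  # Auxiliary task 2: grammatical error type analysis
                "instruction": "Identify the grammatical error types in the sentence.",
                "input": ex.source,
                "output": "; ".join(ex.error_types),
            }
        )
    records.append(
        {  # Main task: correction; unchanged output for already-correct input
            "instruction": "Correct the grammatical errors in the sentence. If it is already correct, output it unchanged.",
            "input": ex.source,
            "output": ex.target,
        }
    )
    return records


if __name__ == "__main__":
    ex = AnnotatedExample(
        source="他昨天去了商店买了很多东西了。",
        target="他昨天去商店买了很多东西。",
        error_types=["redundant component"],
    )
    for record in build_cotask_records(ex):
        print(record)
```

The detection task gives the model an explicit "no error" decision point, which is one plausible way a task chain could discourage unnecessary edits of already-correct sentences.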