Refining Corpora from a Model Calibration Perspective for Chinese Spelling Correction
Dingyao Yu | Yang An | Wei Ye | Xiongfeng Xiao | Shaoguang Mao | Tao Ge | Shikun Zhang
Findings of the Association for Computational Linguistics: ACL 2024
Chinese Spelling Correction (CSC) commonly lacks large-scale high-quality corpora, due to the labor-intensive labeling of spelling errors in real-life human writing or typing scenarios. Two data augmentation methods are widely adopted: (1) *Random Replacement* guided by confusion sets and (2) *OCR/ASR-based Generation* that simulates character misuse. However, both methods inevitably introduce noisy data (e.g., false spelling errors), potentially leading to over-correction. By carefully analyzing the two types of corpora, we find that although the latter achieves more robust generalization performance, the former yields better-calibrated CSC models. We then provide a theoretical analysis of this empirical observation, based on which a corpus refining strategy is proposed. Specifically, OCR/ASR-based data samples are fed into a well-calibrated CSC model trained on a random replacement-based corpus and then filtered based on prediction confidence. By training a simple BERT-based model on the refined OCR/ASR-based corpus, we achieve state-of-the-art performance on three widely-used benchmarks, while significantly alleviating over-correction (e.g., reducing false positive predictions).
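The refining strategy in the abstract can be sketched as a confidence-based filter: a calibrated CSC model scores each OCR/ASR-generated sample, and low-confidence samples (likely false spelling errors) are dropped. The sketch below is illustrative only; the calibrated model is stubbed out, and names such as `confidence_fn` and the `0.9` threshold are assumptions, not values from the paper.

```python
# Minimal sketch of confidence-based corpus filtering, assuming a
# calibrated scoring function is available. `confidence_fn`, `threshold`,
# and the toy examples are hypothetical, not the paper's actual setup.

def refine_corpus(samples, confidence_fn, threshold=0.9):
    """Keep (source, target) pairs that the calibrated model endorses
    with high confidence; drop likely-noisy augmented samples."""
    kept = []
    for src, tgt in samples:
        # confidence_fn stands in for a well-calibrated CSC model's
        # probability that the labeled correction is genuine.
        if confidence_fn(src, tgt) >= threshold:
            kept.append((src, tgt))
    return kept

def toy_confidence(src, tgt):
    # Toy stand-in: treat unchanged pairs as probable false errors.
    return 0.95 if src != tgt else 0.3

corpus = [
    ("他在公园里跑部", "他在公园里跑步"),  # genuine spelling error
    ("今天天气很好", "今天天气很好"),      # noisy sample (no real error)
]
refined = refine_corpus(corpus, toy_confidence)
# Only the genuine-error pair survives the filter.
```

In the paper's setting, the scoring model would be one trained on random replacement-based data, since the analysis finds those models to be better calibrated.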