Linfeng Liu
2025
Driving Chinese Spelling Correction from a Fine-Grained Perspective
Linfeng Liu
|
Hongqiu Wu
|
Hai Zhao
Proceedings of the 31st International Conference on Computational Linguistics
This paper explores the task of Chinese spelling correction (CSC) from a fine-grained perspective, recognizing that existing evaluations lack a nuanced typology of spelling errors. This deficiency can create a misleading impression of model performance, incurring an “invisible” bottleneck that hinders the advancement of CSC research. In this paper, we first categorize spelling errors into six types and conduct a fine-grained evaluation across a wide variety of models, including BERT-based models and LLMs. This allows us to pinpoint the underlying weaknesses of existing state-of-the-art models: utilizing contextual clues and handling the co-existence of multiple typos, associated with contextual errors and multi-typo errors, respectively. However, these errors occur infrequently in conventional training corpora. Therefore, we introduce new error generation methods to augment their occurrence, which can be leveraged to enhance the training of CSC models. We hope this work provides fresh insight for future CSC research.
2023
Empower Nested Boolean Logic via Self-Supervised Curriculum Learning
Hongqiu Wu
|
Linfeng Liu
|
Hai Zhao
|
Min Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Beyond the great cognitive powers showcased by language models, it is crucial to scrutinize whether their reasoning capabilities stem from strong generalization or merely from exposure to relevant data. Rather than constructing increasingly complex logic, this paper probes boolean logic, the root capability of a logical reasoner. We find that pre-trained language models, even including large language models, behave only like random selectors in the face of multi-nested boolean logic, a task that humans can handle with ease. To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (Clr), in which we augment the training data with nested boolean logic chains step by step and program the training to progress gradually from simpler logical patterns to harder ones. This new training paradigm allows language models to generalize effectively to much harder and longer-hop logic, which can hardly be learned through naive training. Furthermore, we show that boolean logic is a strong foundation for improving subsequent general logical tasks.