Xinglu Chen
2025
DualReward: A Dynamic Reinforcement Learning Framework for Cloze Tests Distractor Generation
Tianyou Huang | Xinglu Chen | Jingshen Zhang | Xin Ying Qiu | Ruiying Niu
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"This paper introduces DualReward, a novel reinforcement learning framework for automatic distractor generation in cloze tests. Unlike conventional approaches that rely primarily on supervised learning or static generative models, our method employs a dual reward structure with adaptive scaling that differentiates between human-created gold standard distractors and model-generated candidates. The framework dynamically adjusts reward signal intensity based on model performance and confidence. We evaluate our approach on both passage-level (CLOTH-F) and sentence-level (MCQ) cloze test datasets, demonstrating consistent improvements over state-of-the-art baselines. Experimental results show that our adaptive reward scaling mechanism provides modest but consistent benefits on homogeneous datasets (CLOTH-F) and more substantial improvements (3.48-3.86% in P@1) on diverse, cross-domain data (MCQ), suggesting its particular effectiveness for handling varied question types and domains. Our work offers a flexible framework that effectively balances learning from reliable human examples while exploring novel, high-quality distractors for automated test generation."
2024
Multi-Error Modeling and Fluency-Targeted Pre-training for Chinese Essay Evaluation
Jingshen Zhang | Xiangyu Yang | Xinkai Su | Xinglu Chen | Tianyou Huang | Xinying Qiu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
“This system report presents our approaches and results for the Chinese Essay Fluency Evaluation (CEFE) task at CCL-2024. For Track 1, we optimized predictions for challenging fine-grained error types using binary classification models and trained coarse-grained models on the Chinese Learner 4W corpus. In Track 2, we enhanced performance by constructing a pseudo-dataset with multiple error types per sentence. For Track 3, where we achieved first place, we generated fluency-rated pseudo-data via back-translation for pretraining and used an NSP-based strategy with Symmetric Cross Entropy loss to capture context and mitigate long dependencies. Our methods effectively address key challenges in Chinese Essay Fluency Evaluation.”
Readability-guided Idiom-aware Sentence Simplification (RISS) for Chinese
Jingshen Zhang | Xinglu Chen | Xinying Qiu | Zhimin Wang | Wenhe Feng
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“Chinese sentence simplification faces challenges due to the lack of large-scale labeled parallel corpora and the prevalence of idioms. To address these challenges, we propose Readability-guided Idiom-aware Sentence Simplification (RISS), a novel framework that combines data augmentation techniques. RISS introduces two key components: (1) Readability-guided Paraphrase Selection (RPS), a method for mining high-quality sentence pairs, and (2) Idiom-aware Simplification (IAS), a model that enhances the comprehension and simplification of idiomatic expressions. By integrating RPS and IAS using multi-stage and multi-task learning strategies, RISS outperforms previous state-of-the-art methods on two Chinese sentence simplification datasets. Furthermore, RISS achieves additional improvements when fine-tuned on a small labeled dataset. Our approach demonstrates the potential for more effective and accessible Chinese text simplification.”