Yicheng Zhu
2025
Overview of CCL25-Eval Task6: Chinese Essay Rhetoric Recognition Evaluation (CERRE)
Yujiang Lu | Nuowei Liu | Yupei Ren | Yicheng Zhu | Man Lan | Xiaopeng Bai | Mofan Xu | Qingyu Liao
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"Literary grace in Chinese composition writing is a hallmark of linguistic sophistication, often realized through various rhetorical devices. The automatic identification and analysis of rhetorical devices in essays play a crucial role in educational NLP applications, particularly for assessing writing proficiency and facilitating pedagogical interventions. Although prior research has predominantly focused on coarse-grained recognition of limited rhetorical devices at sentence level, these approaches prove inadequate for handling complex rhetorical structures and emerging educational demands. In this paper, we present the CCL25-Eval Task6: Chinese EssayRhetoric Recognition Evaluation (CERRE), a novel framework comprising three distinct evaluation tracks at the document level: (1) Fine-grained Form-level Categories Recognition, (2)Fine-grained Content-level Categories Recognition, and (3) Rhetorical Component Extraction.The evaluation has attracted 29 registered participating teams, with 8 teams submitting valid system outputs. In particular, two participating systems demonstrated superior performance by exceeding the baseline metrics in complete evaluation criteria."
Exploring the Application of 7B LLMs for Named Entity Recognition in Chinese Ancient Texts
Chenrui Zheng | Yicheng Zhu | Han Bi
Proceedings of the Second Workshop on Ancient Language Processing
This paper explores the application of fine-tuning methods based on 7B large language models (LLMs) to named entity recognition (NER) in Chinese ancient texts. To address the complex semantics and domain-specific characteristics of ancient texts, particularly Traditional Chinese Medicine (TCM) texts, we propose a comprehensive pre-training and fine-tuning strategy. By combining multi-task learning, domain-specific pre-training, and efficient LoRA-based fine-tuning, we achieved significant performance improvements on ancient text NER. Experimental results show that the pre-trained and fine-tuned 7B model achieved an F1 score of 0.93, significantly outperforming general-purpose large language models.
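For readers unfamiliar with LoRA-based efficient fine-tuning, the sketch below shows the standard setup using the Hugging Face PEFT library. It is a minimal illustration under stated assumptions, not the paper's actual configuration: the base model name, rank, and target modules are placeholders.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face PEFT for a 7B
# causal LM; model name and hyperparameters are illustrative
# placeholders, not the values used in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "Qwen/Qwen2-7B"  # hypothetical 7B base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank adapters into the attention
# projections while the 7B base weights stay frozen, which is what
# makes fine-tuning at this scale affordable.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update
    lora_alpha=16,            # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

The adapted model can then be trained with a standard `transformers` training loop on the NER data; only the adapter weights receive gradient updates.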