Overview of CCL25-Eval Task 7: Chinese Literary Language Understanding Evaluation (ZhengMing)
Kang Wang | Qing Wang | Min Peng | Kun Yue | Gang Hu
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"The 24th Chinese Computational Linguistics Conference (CCL25-Eval) features 12 technical evaluation tasks. Among them, Task 7 is the Chinese Literary Language Understanding Evaluation (ZhengMing). ZhengMing is a universal and scalable evaluation framework designed to assess natural language processing (NLP) tasks in the literary domain, such as text classification, text generation, automated question answering, relation extraction, and machine translation. The ZhengMing framework aims to evaluate the performance of large language models (LLMs) in the literary field at a fine-grained level. In this task, 89 teams signed up for the competition, with 5 teams ultimately submitting results. The highest score achieved is 0.65. This paper presents relevant information about this evaluation task, including the dataset, task descriptions, and competition results. More details are available at https://github.com/isShayulajiao/CCL25-Eval-ZhengMing."