Jinwang Song
2025
System Report for CCL25-Eval Task 11: Automatic Evaluation of Chinese Character Hard-Pen Handwriting Quality Based on Large Model Fine-Tuning
Lulu Kong | Hongying Zan | Jinwang Song | Haixin Liu | Yifan Li | Zhewei Luo
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
This technical report explores an approach to automatic evaluation of Chinese character hard-pen handwriting quality by fine-tuning a local vision-language model. To address the difficulty traditional evaluation methods have in providing accurate feedback, our team built an efficient automatic handwriting quality evaluation system using carefully designed prompts combined with fine-tuning. We adopt Qwen2.5-VL-7B-Instruct as the base model and use LoRA fine-tuning to implement handwriting quality grade classification (Subtask 1) and personalized comment generation (Subtask 2). The system integrates visual feature analysis with language generation; during training, we apply gradient checkpointing and BF16 mixed-precision training to optimize GPU memory usage, and design targeted loss functions and evaluation metrics. Experimental results show that our method achieves effective fine-grained evaluation of Chinese handwriting quality.
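The LoRA technique the abstract mentions can be illustrated with a minimal sketch (our illustration of the general method, not the authors' code): the pretrained weight W stays frozen, and only a low-rank update B·A, scaled by alpha/r, is trained. With B zero-initialized, the adapted model starts out identical to the base model.

```python
# Pure-Python sketch of the LoRA idea (illustrative; not the paper's code).

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

d_out, d_in, r, alpha = 4, 4, 1, 2
W = [[float(i == j) for j in range(d_in)] for i in range(d_out)]  # frozen weight
A = [[0.1] * d_in for _ in range(r)]   # trainable down-projection (rank r)
B = [[0.0] * r for _ in range(d_out)]  # trainable up-projection, zero-initialized

def lora_forward(x):
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))            # rank-r adaptation path
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

x = [1.0, 2.0, 3.0, 4.0]
assert lora_forward(x) == matvec(W, x)  # zero-init B: adapter is a no-op
B = [[1.0]] * d_out                     # after "training", B is nonzero
assert lora_forward(x) != matvec(W, x)  # adapter now shifts the output
```

Only the small matrices A and B receive gradients, which is why LoRA pairs well with the memory-saving measures (gradient checkpointing, BF16) named in the abstract.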
System Report for CCL25-Eval Task 1: Enhancing Spatial Semantic Understanding of Large Language Models with Chain-of-Thought and Voting Ensembles
Haixin Liu | Hongying Zan | Jinwang Song | Yifan Li | Lulu Kong
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
This technical report details our team's methods and results in the 5th Spatial Semantic Understanding Evaluation (SpaCE2025). SpaCE2025 continues to focus on assessing large language models' spatial semantic understanding, covering two core dimensions, spatial language understanding and spatial reasoning, across five subtasks: judging the correctness of spatial information, identifying spatial reference entities, judging synonymy between differently formed spatial expressions, Chinese spatial orientation relation reasoning, and English spatial orientation relation reasoning. By designing structured prompts, introducing a chain-of-thought reasoning mechanism, and combining LoRA fine-tuning with voting ensembles, we effectively improved the performance of large language models on spatial semantic understanding tasks. In the final evaluation, our team achieved an overall accuracy of 0.5983 across the five subtasks, ranking fifth overall.
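The voting-ensemble mechanism described above can be sketched in a few lines (an assumed form of majority voting, not the team's actual code): run several model variants or sampled chain-of-thought generations on the same question and keep the most frequent answer.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer; ties broken by first occurrence."""
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:  # first answer reaching the top count wins
        if counts[a] == best:
            return a

# Five hypothetical model runs on one judgment item:
votes = ["correct", "incorrect", "correct", "correct", "incorrect"]
assert majority_vote(votes) == "correct"
```

Aggregating over independent runs tends to cancel individual reasoning errors, which is the intuition behind pairing voting with chain-of-thought sampling.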
A Collaborative Active Learning Framework with Large Models for Legal Event Detection
Tingting Cui | Hongying Zan | Xinmeng Ji | Jinwang Song | Kunli Zhang | Yuxiang Jia
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
Legal event detection aims to identify and classify events in legal texts. However, the complexity of legal cases makes collecting high-quality annotated data a major challenge. Domain data annotation currently relies mainly on manual labor, which is costly and time-consuming. Although traditional active learning can reduce some annotation needs, it still depends on human intervention. The development of large models opens the possibility of automated data annotation, but ensuring annotation reliability remains an open problem. To this end, this paper proposes a novel collaborative training paradigm: active learning iteratively selects training data, a large model generates high-quality annotations, and an evaluation-and-filtering mechanism retains only reliable annotations, greatly reducing manual annotation effort. Experiments on two event detection benchmark datasets show that in low-resource settings the method significantly reduces manual annotation requirements and in some cases approaches the performance of supervised learning.
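One iteration of the collaborative loop described above can be sketched as follows (function names and the confidence-based filter are hypothetical stand-ins, not the paper's API): active learning picks the samples the current model is least confident about, a large model labels them, and an evaluation step keeps only reliable labels.

```python
import random

def select_uncertain(pool, scores, k):
    """Active-learning step: pick the k samples with lowest model confidence."""
    return sorted(pool, key=lambda s: scores[s])[:k]

def llm_annotate(sample):
    """Stand-in for querying a large model for an event label."""
    return {"sample": sample, "label": "event", "confidence": random.random()}

def filter_reliable(annotations, threshold=0.5):
    """Evaluation-and-filtering step: retain only high-confidence annotations."""
    return [a for a in annotations if a["confidence"] >= threshold]

random.seed(0)
pool = ["case_a", "case_b", "case_c", "case_d"]
scores = {"case_a": 0.9, "case_b": 0.2, "case_c": 0.4, "case_d": 0.8}

batch = select_uncertain(pool, scores, k=2)      # most uncertain samples first
assert batch == ["case_b", "case_c"]
labeled = filter_reliable([llm_annotate(s) for s in batch])
assert all(a["confidence"] >= 0.5 for a in labeled)
```

The retained annotations would then be added to the training set and the loop repeated, which is how the framework trades manual labeling for filtered model-generated labels.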
JOLT-SQL: Joint Loss Tuning of Text-to-SQL with Confusion-aware Noisy Schema Sampling
Jinwang Song | Hongying Zan | Kunli Zhang | Lingling Mu | Yingjie Han | Haobo Hua | Min Peng
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Text-to-SQL, which maps natural language to SQL queries, has benefited greatly from recent advances in Large Language Models (LLMs). While LLMs offer various paradigms for this task, including prompting and supervised fine-tuning (SFT), SFT approaches still face challenges such as complex multi-stage pipelines and poor robustness to noisy schema information. To address these limitations, we present JOLT-SQL, a streamlined single-stage SFT framework that jointly optimizes schema linking and SQL generation via a unified loss. JOLT-SQL employs discriminative schema linking, enhanced by local bidirectional attention, alongside a confusion-aware noisy schema sampling strategy with selective attention to improve robustness under noisy schema conditions. Experiments on the Spider and BIRD benchmarks demonstrate that JOLT-SQL achieves state-of-the-art execution accuracy among comparable-size open-source models, while significantly improving both training and inference efficiency.
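The unified loss that the abstract attributes to JOLT-SQL can be illustrated conceptually (our simplified illustration, not the paper's implementation): a schema-linking term (binary classification over candidate columns) and a SQL-generation term (token-level cross-entropy) are summed into one objective, so both are optimized in a single training stage.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one schema item (is this column relevant?)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def ce(probs, gold_idx):
    """Cross-entropy for one generated SQL token."""
    return -math.log(probs[gold_idx])

def joint_loss(link_preds, link_gold, token_probs, gold_tokens, lam=1.0):
    """Single objective combining schema linking and SQL generation."""
    l_link = sum(bce(p, y) for p, y in zip(link_preds, link_gold))
    l_sql = sum(ce(probs, g) for probs, g in zip(token_probs, gold_tokens))
    return l_sql + lam * l_link

loss = joint_loss(
    link_preds=[0.9, 0.2],          # predicted relevance of two columns
    link_gold=[1, 0],               # gold relevance labels
    token_probs=[[0.7, 0.2, 0.1]],  # distribution over a toy SQL vocabulary
    gold_tokens=[0],                # gold token index
)
assert loss > 0
```

Because both terms share one backward pass, there is no separate schema-linking stage, which is the "single-stage" property the abstract highlights.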
2024
Essay Rhetoric Recognition and Understanding Using Synthetic Data and Model Ensemble Enhanced Large Language Models
Jinwang Song | Hongying Zan | Kunli Zhang
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
Natural language processing technology has been widely applied in the field of education. Essay writing serves as a crucial method for evaluating students' language skills and logical thinking abilities. Rhetoric, an essential component of an essay, is also a key reference for assessing writing quality. In the era of large language models (LLMs), applying LLMs to the tasks of automatic classification and extraction of rhetorical devices is of significant importance. In this paper, we fine-tune LLMs with specific instructions to adapt them for the tasks of recognizing and extracting rhetorical devices in essays. To further enhance the performance of LLMs, we experimented with multi-task fine-tuning and expanded the training dataset with synthetic data. Additionally, we explored a model ensemble approach based on label re-inference. Our method achieved a score of 66.29 in Task 6 of the CCL 2024 Eval, Chinese Essay Rhetoric Recognition and Understanding (CERRU), securing first place.
CMoralEval: A Moral Evaluation Benchmark for Chinese Large Language Models
Linhao Yu | Yongqi Leng | Yufei Huang | Shang Wu | Haixin Liu | Xinmeng Ji | Jiahui Zhao | Jinwang Song | Tingting Cui | Xiaoqing Cheng | Tao Liu | Deyi Xiong
Findings of the Association for Computational Linguistics: ACL 2024
How would a large language model (LLM) respond in an ethically relevant context? In this paper, we curate a large benchmark, CMoralEval, for morality evaluation of Chinese LLMs. The data sources of CMoralEval are two-fold: 1) a Chinese TV program discussing Chinese moral norms through stories from society and 2) a collection of Chinese moral anomalies from various newspapers and academic papers on morality. With these sources, we aim to create a moral evaluation dataset characterized by diversity and authenticity. We develop a morality taxonomy and a set of fundamental moral principles that are not only rooted in traditional Chinese culture but also consistent with contemporary societal norms. To facilitate efficient construction and annotation of instances in CMoralEval, we establish a platform with AI-assisted instance generation to streamline the annotation process. These efforts help us curate CMoralEval, which encompasses both explicit moral scenarios (14,964 instances) and moral dilemma scenarios (15,424 instances), each drawing on instances from different data sources. We conduct extensive experiments with CMoralEval to examine a variety of Chinese LLMs. Experimental results demonstrate that CMoralEval is a challenging benchmark for Chinese LLMs.
OpenEval: Benchmarking Chinese LLMs across Capability, Alignment and Safety
Chuang Liu | Linhao Yu | Jiaxuan Li | Renren Jin | Yufei Huang | Ling Shi | Junhui Zhang | Xinmeng Ji | Tingting Cui | Tao Liu | Jinwang Song | Hongying Zan | Sun Li | Deyi Xiong
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
The rapid development of Chinese large language models (LLMs) poses significant challenges for efficient LLM evaluation. While current initiatives have introduced new benchmarks or evaluation platforms for assessing Chinese LLMs, many of these focus primarily on capabilities, usually overlooking potential alignment and safety issues. To address this gap, we introduce OpenEval, an evaluation testbed that benchmarks Chinese LLMs across capability, alignment, and safety. For capability assessment, we include 12 benchmark datasets to evaluate Chinese LLMs along 4 sub-dimensions: NLP tasks, disciplinary knowledge, commonsense reasoning, and mathematical reasoning. For alignment assessment, OpenEval contains 7 datasets that examine bias, offensiveness, and illegality in the outputs of Chinese LLMs. To evaluate safety, especially anticipated risks (e.g., power-seeking, self-awareness) of advanced LLMs, we include 6 datasets. In addition to these benchmarks, we have implemented a phased public evaluation and benchmark update strategy to ensure that OpenEval keeps pace with the development of Chinese LLMs, and can even provide cutting-edge benchmark datasets to guide that development. In our first public evaluation, we tested a range of Chinese LLMs spanning 7B to 72B parameters, including both open-source and proprietary models. Evaluation results indicate that while Chinese LLMs have shown impressive performance on certain tasks, more attention should be directed towards broader aspects such as commonsense reasoning, alignment, and safety.
Co-authors
- Hongying Zan (昝红英) 6
- Tingting Cui 3
- Xinmeng Ji 3
- Kunli Zhang 3
- Yufei Huang 2
- Lulu Kong 2
- Yifan Li 2
- Tao Liu 2
- Haixin Liu 2
- Deyi Xiong (熊德意) 2
- Linhao Yu 2
- Xiaoqing Cheng 1
- Yingjie Han 1
- Haobo Hua 1
- Yuxiang Jia (贾玉祥) 1
- Renren Jin 1
- Yongqi Leng 1
- Jiaxuan Li 1
- Sun Li 1
- Haixin Liu 1
- Chuang Liu 1
- Zhewei Luo 1
- Lingling Mu (穆玲玲) 1
- Min Peng 1
- Ling Shi 1
- Shang Wu 1
- Junhui Zhang 1
- Jiahui Zhao 1