Xeron Du
2026
COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values
Siwei Wu | JinCheng Ren | Xeron Du | Shuyue Guo | Xingwei Qu | Yiming Liang | Jie Liu | Yunwen Li | Tyler Loakman | Tianyu Zheng | Boyu Feng | Huaqing Yuan | Zili Wang | Jiaheng Liu | Wenhao Huang | Chenglin Cai | Haoran Que | Jian Yang | Yuelin Bai | Zekun Moore Wang | Zhouliang Yu | Qunshu Lin | Ding Pan | Yuchen Eleanor Jiang | Tiannan Wang | Wangchunshu Zhou | Shenzhi Wang | Xingyuan Bu | Minghao Liu | Guoyin Wang | Ge Zhang | Chenghua Lin
Findings of the Association for Computational Linguistics: EACL 2026
Existing Chinese preference datasets suffer from limited scale, restricted domain coverage, and insufficiently rigorous data validation. Human annotation significantly limits the scalability of human preference datasets. As a result, Chinese alignment and Chinese reward models (CRMs) have not yet been thoroughly explored. To address these challenges, we design an LLM-based data annotation pipeline with no human intervention. Based on this pipeline, we curate COIG-P (Chinese Open Instruction Generalist - Preference), a high-quality, large-scale Chinese preference dataset consisting of 1M Chinese preference pairs and 92k carefully curated Chinese queries across diverse domains, including Chat, Coding, Maths, and others. We conduct experiments to verify the quality of COIG-P from two perspectives. (1) COIG-P brings significant performance improvements for the Qwen2/2.5 and Infinity-Instruct model series on AlignBench through DPO, with gains ranging from 2% to 12%. Furthermore, it significantly outperforms other existing Chinese preference datasets. (2) We train an 8B-sized CRM and manually annotate a Chinese Reward Benchmark (CRBench). Our CRM demonstrates robust scoring ability on CRBench. In addition, in practical data construction experiments, the quality of the data constructed by our CRM is comparable to that produced by GPT-4o.
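The pairing step of an LLM-judged preference pipeline like the one the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the function name, the judge scores, and the `margin` threshold are all assumptions for the example.

```python
# Hypothetical sketch of the chosen/rejected pairing step in an
# LLM-judged preference pipeline (not the COIG-P release code).
# Assumes each candidate response already carries a judge score.

def build_preference_pairs(scored, margin=2.0):
    """Pair the best- and worst-scored responses per query when the
    score gap exceeds `margin`, yielding (chosen, rejected) pairs."""
    pairs = []
    for query, responses in scored.items():
        ranked = sorted(responses, key=lambda r: r["score"], reverse=True)
        best, worst = ranked[0], ranked[-1]
        if best["score"] - worst["score"] >= margin:
            pairs.append({"query": query,
                          "chosen": best["text"],
                          "rejected": worst["text"]})
    return pairs

demo = {
    "解释什么是动态规划": [
        {"text": "详细且正确的解释……", "score": 9.0},
        {"text": "跑题的回答……", "score": 3.0},
    ],
    "1+1等于几": [
        {"text": "2", "score": 8.0},
        {"text": "等于2。", "score": 7.5},  # gap below margin: no pair emitted
    ],
}
pairs = build_preference_pairs(demo)
```

Filtering on a score margin is one plausible way to keep only unambiguous pairs; the paper's actual validation criteria may differ.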
2025
COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning
Yuelin Bai | Xeron Du | Yiming Liang | Leo Jin | Junting Zhou | Ziqiang Liu | Feiteng Fang | Mingshan Chang | Tianyu Zheng | Xincheng Zhang | Nuo Ma | Zekun Moore Wang | Ruibin Yuan | Haihong Wu | Hongquan Lin | Wenhao Huang | Jiajun Zhang | Chenghua Lin | Jie Fu | Min Yang | Shiwen Ni | Ge Zhang
Findings of the Association for Computational Linguistics: NAACL 2025
Remarkable progress on large language models (LLMs), particularly in English, has facilitated impressive capabilities in following human instructions. However, there remains a noticeable gap in instruction fine-tuning for Chinese, where complex linguistic features pose significant challenges. Existing datasets, generally distilled from English-centric LLMs, are not well-aligned with Chinese users’ interaction patterns. To bridge this gap, we introduce COIG-CQIA, a new Chinese instruction tuning dataset derived from various real-world data resources and subjected to comprehensive human verification. We conduct extensive experiments on COIG-CQIA and compare models trained on it with strong baseline models and datasets. The experimental results show that models trained on COIG-CQIA achieve highly competitive performance on diverse benchmarks. Additionally, our findings offer several insights for designing effective Chinese instruction-tuning datasets and data mixing strategies. Our dataset is available at https://huggingface.co/datasets/m-a-p/COIG-CQIA.
Quantification of Large Language Model Distillation
Sunbowen Lee | Junting Zhou | Chang Ao | Kaige Li | Xeron Du | Sirui He | Haihong Wu | Tianci Liu | Jiaheng Liu | Hamid Alinejad-Rokny | Min Yang | Yitao Liang | Zhoufutu Wen | Shiwen Ni
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Model distillation is a fundamental technique in building large language models (LLMs), transferring knowledge from a teacher model to a student model. However, distillation can lead to model homogenization, reducing diversity among models and impairing their ability to robustly handle complex or novel tasks. These limitations underscore the need to systematically quantify the distillation process and its impact. In this work, we propose a framework to evaluate and quantify model distillation. Our method addresses two key aspects: (1) identifying identity cognition contradictions to assess discrepancies in how models perceive and represent identity-related information, and (2) analyzing multi-granularity response similarities across models to measure the extent of homogenization. Experimental results demonstrate two key insights: (1) well-known closed-source and open-source LLMs usually exhibit high distillation degrees, except for Claude, Doubao, and Gemini; (2) base LLMs show higher distillation degrees than aligned LLMs. By offering a systematic approach to improve the transparency of LLM data distillation, we call for more independent LLM development and more transparent technical reports to improve LLMs’ robustness and safety. The code and data are available at https://github.com/Aegis1863/LLMs-Distillation-Quantification.
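The "multi-granularity response similarity" idea from the abstract can be illustrated with a toy metric. The actual measures live in the linked repository; the two granularities (character and word) and the Jaccard overlap used here are simplifying assumptions for the sketch.

```python
# Illustrative sketch of multi-granularity response similarity between
# two models' answers; not the metric from the paper's repository.

def jaccard(a, b):
    """Set-overlap similarity between two iterables of tokens."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def response_similarity(resp_a, resp_b):
    """Average Jaccard similarity at character and word granularity."""
    char_sim = jaccard(resp_a, resp_b)          # characters as tokens
    word_sim = jaccard(resp_a.split(), resp_b.split())
    return (char_sim + word_sim) / 2

student = "distillation transfers knowledge from teacher to student"
teacher = "distillation transfers knowledge from a teacher model"
score = response_similarity(student, teacher)   # in [0, 1]
```

Averaging over granularities captures the intuition that homogenized models agree both in wording and in surface form; a production metric would likely use embeddings rather than raw token overlap.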
Can MLLMs Understand the Deep Implication Behind Chinese Images?
Chenhao Zhang | Xi Feng | Yuelin Bai | Xeron Du | Jinchang Hou | Kaixin Deng | Guangzeng Han | Qinrui Li | Bingli Wang | Jiaheng Liu | Xingwei Qu | Yifei Zhang | Qixuan Zhao | Yiming Liang | Ziqiang Liu | Feiteng Fang | Min Yang | Wenhao Huang | Chenghua Lin | Ge Zhang | Shiwen Ni
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
As the capabilities of Multimodal Large Language Models (MLLMs) improve, the need for higher-order evaluation of them is increasing. However, there is a lack of work evaluating MLLMs on higher-order perception and understanding of Chinese visual content. To address this, we introduce CII-Bench, which aims to assess such capabilities of MLLMs for Chinese images. To ensure the authenticity of the Chinese context, images in CII-Bench are sourced from the Chinese Internet and manually reviewed, with corresponding answers also manually crafted. Additionally, CII-Bench incorporates images that represent Chinese traditional culture, such as famous Chinese traditional paintings, which can deeply reflect a model’s understanding of Chinese traditional culture. Experiments on multiple MLLMs using CII-Bench yielded significant findings. There is a large gap between MLLMs and humans in performance: the highest MLLM accuracy is 64.4%, while the human average is 78.2% and the peak is 81.0%. MLLMs perform poorly on traditional culture images, indicating limitations in understanding high-level semantics and a lack of deep knowledge of Chinese traditional culture. Moreover, most models achieve higher accuracy when image emotion hints are added to the prompts. We believe CII-Bench will help MLLMs better understand Chinese semantics and culture-specific images, and advance the development of expert artificial general intelligence (AGI). Our project is publicly available at https://cii-bench.github.io.
KARPA: A Training-free Method of Adapting Knowledge Graph as References for Large Language Model’s Reasoning Path Aggregation
Siyuan Fang | Kaijing Ma | Tianyu Zheng | Xeron Du | Ningxuan Lu | Ge Zhang | Qingkun Tang
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) demonstrate exceptional performance across a variety of tasks, yet they are often hampered by hallucinations and outdated knowledge. Leveraging knowledge graphs (KGs) as external knowledge sources has emerged as a viable solution, but existing methods for LLM-based knowledge graph question answering (KGQA) are often limited by step-by-step decision-making on KGs, restricting the global planning and reasoning capabilities of LLMs, or they require fine-tuning or pre-training on specific KGs. To address these challenges, we propose Knowledge graph Assisted Reasoning Path Aggregation (KARPA), a novel framework that harnesses the global planning abilities of LLMs for efficient and accurate KG reasoning. KARPA operates in three steps: pre-planning relation paths using the LLM’s global planning capabilities, matching semantically relevant paths via an embedding model, and reasoning over these paths to generate answers. Unlike existing KGQA methods, KARPA avoids stepwise traversal, requires no additional training, and is adaptable to various LLM architectures. Extensive experimental results show that KARPA achieves state-of-the-art performance in KGQA tasks, delivering both high efficiency and accuracy.
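The middle step of the three-step pipeline above (matching semantically relevant paths) can be sketched with a toy embedding. KARPA uses a learned embedding model; the hashed bag-of-relations vectors and the `match_paths` helper here are stand-ins so the example stays dependency-free.

```python
# Minimal sketch of KARPA-style path matching: ranking knowledge-graph
# relation paths against an LLM's pre-planned path by vector similarity.
# Bag-of-relations counts stand in for a real embedding model.
import math
from collections import Counter

def embed(path):
    """Toy embedding: multiset of relation names in a path."""
    return Counter(path)

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def match_paths(planned, kg_paths, top_k=1):
    """Rank candidate KG relation paths by similarity to the planned path."""
    q = embed(planned)
    ranked = sorted(kg_paths, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:top_k]

planned = ["born_in", "capital_of"]            # path pre-planned by the LLM
candidates = [["born_in", "capital_of"],       # exact semantic match
              ["acted_in", "directed_by"],     # unrelated
              ["born_in", "located_in"]]       # partial overlap
best = match_paths(planned, candidates)
```

The top-ranked paths would then be handed back to the LLM for the final reasoning step, replacing stepwise graph traversal with a single global match.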
Co-authors
- Ge Zhang 4
- Yuelin Bai 3
- Wenhao Huang 3
- Yiming Liang 3
- Chenghua Lin 3
- Jiaheng Liu 3
- Shiwen Ni 3
- Min Yang 3
- Tianyu Zheng 3
- Feiteng Fang 2
- Ziqiang Liu 2
- Xingwei Qu 2
- Zekun Moore Wang 2
- Haihong Wu 2
- Junting Zhou 2
- Hamid Alinejad-Rokny 1
- Chang Ao 1
- Xingyuan Bu 1
- Chenglin Cai 1
- Mingshan Chang 1
- Kaixin Deng 1
- Siyuan Fang 1
- Boyu Feng 1
- Xi Feng 1
- Jie Fu 1
- Shuyue Guo 1
- Guangzeng Han 1
- Sirui He 1
- Jinchang Hou 1
- Yuchen Eleanor Jiang 1
- Leo Jin 1
- Sunbowen Lee 1
- Yunwen Li 1
- Kaige Li 1
- Qinrui Li 1
- Yitao Liang 1
- Hongquan Lin 1
- Qunshu Lin 1
- Jie Liu 1
- Minghao Liu 1
- Tianci Liu 1
- Tyler Loakman 1
- Ningxuan Lu 1
- Nuo Ma 1
- Kaijing Ma 1
- Ding Pan 1
- Haoran Que 1
- JinCheng Ren 1
- Qingkun Tang 1
- Zili Wang 1
- Tiannan Wang 1
- Shenzhi Wang 1
- Guoyin Wang 1
- Bingli Wang 1
- Zhoufutu Wen 1
- Siwei Wu 1
- Jian Yang 1
- Zhouliang Yu 1
- Ruibin Yuan 1
- Huaqing Yuan 1
- Xincheng Zhang 1
- Jiajun Zhang 1
- Chenhao Zhang 1
- Yifei Zhang 1
- Qixuan Zhao 1
- Wangchunshu Zhou 1