CMoralEval: A Moral Evaluation Benchmark for Chinese Large Language Models
Linhao Yu | Yongqi Leng | Yufei Huang | Shang Wu | Haixin Liu | Xinmeng Ji | Jiahui Zhao | Jinwang Song | Tingting Cui | Xiaoqing Cheng | Liutao Liutao | Deyi Xiong
Findings of the Association for Computational Linguistics: ACL 2024
How would a large language model (LLM) respond in an ethically relevant context? In this paper, we curate a large benchmark, CMoralEval, for the morality evaluation of Chinese LLMs. The data sources of CMoralEval are two-fold: 1) a Chinese TV program that discusses Chinese moral norms through stories drawn from society and 2) a collection of Chinese moral anomies from various newspapers and academic papers on morality. With these sources, we aim to create a moral evaluation dataset characterized by diversity and authenticity. We develop a morality taxonomy and a set of fundamental moral principles that are not only rooted in traditional Chinese culture but also consistent with contemporary societal norms. To facilitate efficient construction and annotation of instances in CMoralEval, we establish a platform with AI-assisted instance generation to streamline the annotation process. These enable us to curate CMoralEval, which encompasses both explicit moral scenarios (14,964 instances) and moral dilemma scenarios (15,424 instances), each drawing instances from the different data sources. We conduct extensive experiments with CMoralEval to examine a variety of Chinese LLMs. Experimental results demonstrate that CMoralEval is a challenging benchmark for Chinese LLMs.