CPsyExam: A Chinese Benchmark for Evaluating Psychology using Examinations
Jiahao Zhao | Jingwei Zhu | Minghuan Tan | Min Yang | Renhao Li | Yang Di | Chenhao Zhang | Guancheng Ye | Chengming Li | Xiping Hu | Derek F. Wong
Proceedings of the 31st International Conference on Computational Linguistics
In this paper, we introduce a novel psychological benchmark, CPsyExam, constructed from questions sourced from Chinese examination systems. CPsyExam is designed to prioritize psychological knowledge and case analysis separately, recognizing the importance of applying psychological knowledge to real-world scenarios. We collect 22k questions from 39 psychology-related subjects across four Chinese examination systems. From this pool of 22k questions, we use 4k to create a benchmark that offers balanced coverage of subjects and incorporates a diverse range of case analysis techniques. Furthermore, we evaluate a range of existing large language models (LLMs), from open-source to proprietary models. Our experiments and analysis demonstrate that CPsyExam serves as an effective benchmark for enhancing the understanding of psychology within LLMs and enables the comparison of LLMs across various granularities.