ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models

Yuanyi Ren, Haoran Ye, Hanjun Fang, Xin Zhang, Guojie Song


Abstract
Large Language Models (LLMs) are transforming diverse fields and gaining increasing influence as human proxies. This development underscores the urgent need to evaluate the value orientations and value understanding of LLMs to ensure their responsible integration into public-facing applications. This work introduces ValueBench, the first comprehensive psychometric benchmark for evaluating value orientations and value understanding in LLMs. ValueBench collects data from 44 established psychometric inventories, encompassing 453 multifaceted value dimensions. We propose an evaluation pipeline grounded in realistic human-AI interactions to probe value orientations, along with novel tasks for evaluating value understanding in an open-ended value space. Through extensive experiments on six representative LLMs, we unveil their shared and distinctive value orientations and demonstrate their ability to approximate expert conclusions in value-related extraction and generation tasks.
Anthology ID:
2024.acl-long.111
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2015–2040
URL:
https://aclanthology.org/2024.acl-long.111
Cite (ACL):
Yuanyi Ren, Haoran Ye, Hanjun Fang, Xin Zhang, and Guojie Song. 2024. ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2015–2040, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models (Ren et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.111.pdf