Haoran Ye
2025
A Human-Machine Value-Driven Dialogue Emotion Generation Model (人机价值观驱动的对话情绪生成模型)
Zhiqiang Ma | Haoran Ye | Jia Liu | Kai Lv
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
The dialogue emotion generation task aims to predict the emotion category of the utterance to be generated in reply. Existing emotion generation models overlook the regulating and guiding role that value consistency between the user and the model plays in emotion generation, producing a gap between the emotions the dialogue system generates and those the user expects and weakening the emotional resonance between system and user. This paper proposes a human-machine value-driven dialogue emotion generation model, HVDEGM, which dynamically incorporates user value features through a multi-stage gating mechanism to guide emotion generation. Grounded in the principle of value consistency, the model comprises three units. First, a context-revision attention unit enhances emotional and semantic feature information through two rounds of attention; second, a value fusion unit dynamically balances the weights of user value features and the dialogue system's historical value features through multi-stage fusion gating; finally, a response-regulation unit strengthens the complementary associations among emotional, semantic, and value features through bidirectional attention and cross-attention. Experiments on the newly constructed value-oriented dialogue dataset ValueCon show that HVDEGM improves over baseline models such as DialogueRNN and DialogueGCN by 2.9%, 2.5%, 0.9%, and 4.1% on Precision, Recall, F1, and emotional resonance, respectively, demonstrating the effectiveness of the proposed method.
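The multi-stage fusion gating described in the abstract can be illustrated with the standard gated-fusion pattern below: a learned sigmoid gate balances a user value vector against a dialogue-history value vector. This is a minimal sketch of that general mechanism, not the paper's released code; all names, shapes, and parameters here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_value_fusion(user_vals, history_vals, W, b):
    """Fuse user value features with dialogue-history value features.
    The gate lies in (0, 1) elementwise, so each fused component is a
    convex combination of the two inputs (illustrative pattern only)."""
    gate = sigmoid(W @ np.concatenate([user_vals, history_vals]) + b)
    return gate * user_vals + (1.0 - gate) * history_vals

# Toy example with random (untrained) parameters.
rng = np.random.default_rng(0)
d = 4
u = rng.normal(size=d)            # hypothetical user value features
h = rng.normal(size=d)            # hypothetical history value features
W = rng.normal(size=(d, 2 * d))   # gate projection
b = np.zeros(d)
fused = gated_value_fusion(u, h, W, b)
```

Because the gate is elementwise in (0, 1), each fused component stays between the corresponding user and history values; in a trained model the gate learns when to weight one source over the other.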
Generative Psycho-Lexical Approach for Constructing Value Systems in Large Language Models
Haoran Ye | TianZe Zhang | Yuhang Xie | Liyuan Zhang | Yuanyi Ren | Xin Zhang | Guojie Song
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Values are core drivers of individual and collective perception, cognition, and behavior. Value systems, such as Schwartz’s Theory of Basic Human Values, delineate the hierarchy and interplay among these values, enabling cross-disciplinary investigations into decision-making and societal dynamics. Recently, the rise of Large Language Models (LLMs) has raised concerns regarding their elusive intrinsic values. Despite growing efforts in evaluating, understanding, and aligning LLM values, a psychologically grounded LLM value system remains underexplored. This study addresses the gap by introducing the Generative Psycho-Lexical Approach (GPLA), a scalable, adaptable, and theoretically informed method for constructing value systems. Leveraging GPLA, we propose a psychologically grounded five-factor value system tailored for LLMs. For systematic validation, we present three benchmarking tasks that integrate psychological principles with cutting-edge AI priorities. Our results reveal that, compared with the canonical Schwartz values, the proposed value system meets standard psychological criteria, better captures LLM values, improves LLM safety prediction, and enhances LLM alignment.
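Psycho-lexical approaches classically derive value dimensions by factor-analyzing a ratings-by-descriptors matrix. The sketch below uses plain PCA via SVD as an illustrative stand-in for that factor-extraction step; it is not the GPLA pipeline itself, and the matrix, descriptor count, and factor count are all made-up assumptions.

```python
import numpy as np

def extract_value_factors(ratings, k):
    """Toy psycho-lexical factor extraction: center a
    (raters x descriptors) rating matrix and take the top-k
    principal axes as candidate value dimensions.
    Illustrative PCA only, not the paper's method."""
    X = ratings - ratings.mean(axis=0)
    # SVD of the centered matrix: rows of Vt are principal axes.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:k].T * s[:k]               # descriptor loadings per factor
    explained = (s[:k] ** 2) / (s ** 2).sum() # variance explained per factor
    return loadings, explained

rng = np.random.default_rng(1)
ratings = rng.normal(size=(100, 12))  # hypothetical: 100 raters x 12 descriptors
loadings, explained = extract_value_factors(ratings, k=5)
```

In a real psycho-lexical study the factor count would be chosen from criteria such as explained variance or parallel analysis, and the loadings would be rotated and interpreted against the descriptor lexicon.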
2024
ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models
Yuanyi Ren | Haoran Ye | Hanjun Fang | Xin Zhang | Guojie Song
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) are transforming diverse fields and gaining increasing influence as human proxies. This development underscores the urgent need for evaluating value orientations and understanding of LLMs to ensure their responsible integration into public-facing applications. This work introduces ValueBench, the first comprehensive psychometric benchmark for evaluating value orientations and understanding in LLMs. ValueBench collects data from 44 established psychometric inventories, encompassing 453 multifaceted value dimensions. We propose an evaluation pipeline grounded in realistic human-AI interactions to probe value orientations, along with novel tasks for evaluating value understanding in an open-ended value space. With extensive experiments conducted on six representative LLMs, we unveil their shared and distinctive value orientations and exhibit their ability to approximate expert conclusions in value-related extraction and generation tasks.
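Scoring established psychometric inventories of the kind ValueBench collects typically means averaging Likert responses per value dimension, with negatively keyed items reversed. The helper below is a hypothetical illustration of that standard scoring convention, not ValueBench's published code; the item keys and scale are assumptions.

```python
def score_inventory(responses, keys, scale_max=7):
    """Average Likert responses per value dimension, reversing
    negatively keyed items (reversed value = scale_max + 1 - response).
    `keys` maps item index -> (dimension, is_reversed). Hypothetical
    helper for illustration only."""
    sums, counts = {}, {}
    for i, r in enumerate(responses):
        dim, rev = keys[i]
        val = (scale_max + 1 - r) if rev else r
        sums[dim] = sums.get(dim, 0) + val
        counts[dim] = counts.get(dim, 0) + 1
    return {d: sums[d] / counts[d] for d in sums}

# Hypothetical 3-item inventory on a 1-7 scale; item 1 is reverse-keyed.
keys = {0: ("benevolence", False),
        1: ("benevolence", True),
        2: ("power", False)}
scores = score_inventory([6, 2, 4], keys)
# item 1 reverses to 7 + 1 - 2 = 6, so benevolence = (6 + 6) / 2 = 6.0
```

The same convention extends to LLM responses: elicit a Likert-style answer per item, then aggregate per dimension to obtain a value-orientation profile.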