Xinfeng Yuan
2024
Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works
Xinfeng Yuan | Siyu Yuan | Yuhan Cui | Tianhe Lin | Xintao Wang | Rui Xu | Jiangjie Chen | Deqing Yang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have demonstrated impressive performance and spurred numerous AI applications, among which role-playing agents (RPAs) are particularly popular, especially for fictional characters. A prerequisite for these RPAs is the capability of LLMs to understand characters from fictional works. Previous efforts have evaluated this capability via basic classification tasks or characteristic imitation, failing to capture the nuanced character understanding of LLMs. In this paper, we propose evaluating LLMs’ character understanding capability via the character profiling task, i.e., summarizing character profiles from corresponding materials, a widely adopted yet understudied practice for RPA development. Specifically, we construct the CROSS dataset from literature experts and assess the generated profiles by comparing them with ground truth references and evaluating their applicability in downstream tasks. Our experiments, which cover various summarization methods and LLMs, have yielded promising results. These results strongly validate the character understanding capability of LLMs. Resources are available at https://github.com/Joanna0123/character_profiling.
Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data
Yiting Ran | Xintao Wang | Rui Xu | Xinfeng Yuan | Jiaqing Liang | Yanghua Xiao | Deqing Yang
Findings of the Association for Computational Linguistics: EMNLP 2024
Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters’ knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs). In this paper, we propose to enhance RPLMs via personality-indicative data. Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters. Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities in both general and personality-related evaluations.