Can LLMs Express Personality Across Cultures? Introducing CulturalPersonas for Evaluating Trait Alignment
Priyanka Dey | Aayush Bothra | Yugal Khanter | Jieyu Zhao | Emilio Ferrara
Findings of the Association for Computational Linguistics: EMNLP 2025
As LLMs become central to interactive applications, ranging from tutoring to mental health, the ability to express personality in culturally appropriate ways is increasingly important. While recent work has explored personality evaluation of LLMs, it largely overlooks the interplay between culture and personality. To address this, we introduce CulturalPersonas, the first large-scale benchmark with human validation for evaluating LLMs’ personality expression in culturally grounded, behaviorally rich contexts. Our dataset spans 3,000 scenario-based questions across six diverse countries, designed to elicit personality through everyday scenarios rooted in local values. We evaluate how closely three models’ personality distributions align with real human populations in two evaluation settings: multiple-choice and open-ended response formats. Our results show that CulturalPersonas improves alignment with country-specific human personality distributions (over a 20% reduction in Wasserstein distance across models and countries) and elicits more expressive, culturally coherent outputs than existing benchmarks. CulturalPersonas also surfaces meaningful ways in which models modulate trait outputs in response to culturally grounded prompts, offering new directions for aligning LLMs with global norms of behavior. By bridging personality expression and cultural nuance, we envision that CulturalPersonas will pave the way for more socially intelligent and globally adaptive LLMs. Datasets and code are available at: https://github.com/limenlp/CulturalPersonas.
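The abstract measures alignment as the Wasserstein distance between a model's elicited trait-score distribution and a human reference distribution for each country. The snippet below is a minimal illustrative sketch of that comparison, not the authors' released code; the trait name, score scale, and sample values are placeholders assumed for illustration.

```python
# Illustrative sketch: compare a model's elicited trait scores to a human
# reference sample for one country using the 1D Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical per-trait score samples on a 1-5 scale (placeholder data).
human_scores = {"openness": rng.normal(3.6, 0.6, 500).clip(1, 5)}
model_scores = {"openness": rng.normal(3.2, 0.4, 500).clip(1, 5)}

for trait, human in human_scores.items():
    d = wasserstein_distance(human, model_scores[trait])
    print(f"{trait}: Wasserstein distance = {d:.3f}")  # lower = closer alignment
```

In practice, the same comparison would be repeated per trait, per country, and per model to produce the aggregate reduction reported in the abstract.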