LLMs Simulate Big5 Personality Traits: Further Evidence

Aleksandra Sorokovikova, Sharwin Rezagholi, Natalia Fedorova, Ivan Yamshchikov


Abstract
We present an empirical investigation into the simulation of the Big5 personality traits by large language models (LLMs), namely Llama-2, GPT-4, and Mixtral. We analyze the personality traits simulated by these models and the stability of these simulations. Our findings contribute to the broader understanding of LLMs' capacity to simulate personality traits and the respective implications for personalized human-computer interaction.
Anthology ID: 2024.personalize-1.7
Volume: Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024)
Month: March
Year: 2024
Address: St. Julians, Malta
Editors: Ameet Deshpande, EunJeong Hwang, Vishvak Murahari, Joon Sung Park, Diyi Yang, Ashish Sabharwal, Karthik Narasimhan, Ashwin Kalyan
Venues: PERSONALIZE | WS
Publisher: Association for Computational Linguistics
Pages: 83–87
URL: https://aclanthology.org/2024.personalize-1.7
Cite (ACL): Aleksandra Sorokovikova, Sharwin Rezagholi, Natalia Fedorova, and Ivan Yamshchikov. 2024. LLMs Simulate Big5 Personality Traits: Further Evidence. In Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024), pages 83–87, St. Julians, Malta. Association for Computational Linguistics.
Cite (Informal): LLMs Simulate Big5 Personality Traits: Further Evidence (Sorokovikova et al., PERSONALIZE-WS 2024)
PDF: https://aclanthology.org/2024.personalize-1.7.pdf