Agam Goyal
2024
Simulating Opinion Dynamics with Networks of LLM-based Agents
Yun-Shiuan Chuang | Agam Goyal | Nikunj Harlalka | Siddharth Suresh | Robert Hawkins | Sijia Yang | Dhavan Shah | Junjie Hu | Timothy Rogers
Findings of the Association for Computational Linguistics: NAACL 2024
Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often over-simplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs). Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.
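The paper's pipeline is not reproduced here, but the setup the abstract describes (a population of LLM agents that repeatedly exchange opinions, with confirmation bias induced purely through the prompt) can be illustrated with a minimal Python sketch. The `llm` callable, the prompt wording, and the interaction scheme (each agent reads three random peers per step) are assumptions for illustration, not the authors' implementation.

```python
import random
from typing import Callable

def make_prompt(persona: str, own_opinion: str, peer_opinions: list[str],
                confirmation_bias: bool) -> str:
    """Compose an agent's prompt; the bias instruction is a hypothetical paraphrase."""
    bias_note = (
        "You tend to trust information that agrees with your current view and "
        "discount information that contradicts it.\n" if confirmation_bias else ""
    )
    peers = "\n".join(f"- {op}" for op in peer_opinions)
    return (
        f"{persona}\n{bias_note}"
        f"Your current opinion: {own_opinion}\n"
        f"Opinions you just heard from others:\n{peers}\n"
        "State your updated opinion in one sentence."
    )

def simulate(llm: Callable[[str], str], personas: list[str],
             initial_opinions: list[str], steps: int = 10,
             confirmation_bias: bool = False) -> list[str]:
    """Toy opinion-dynamics loop: each step, every agent reads a random
    sample of peers' opinions and restates its own via the LLM."""
    opinions = list(initial_opinions)
    n = len(opinions)
    for _ in range(steps):
        new_opinions = []
        for i in range(n):
            peers = random.sample([opinions[j] for j in range(n) if j != i],
                                  k=min(3, n - 1))
            prompt = make_prompt(personas[i], opinions[i], peers, confirmation_bias)
            new_opinions.append(llm(prompt))
        opinions = new_opinions
    return opinions
```

Consensus versus fragmentation could then be diagnosed from the final `opinions`, for example by embedding and clustering them.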
Beyond Demographics: Aligning Role-playing LLM-based Agents Using Human Belief Networks
Yun-Shiuan Chuang | Krirk Nirunwiroj | Zach Studdiford | Agam Goyal | Vincent Frigo | Sijia Yang | Dhavan Shah | Junjie Hu | Timothy Rogers
Findings of the Association for Computational Linguistics: EMNLP 2024
Creating human-like large language model (LLM) agents is crucial for faithful social simulation. Having LLMs role-play based on demographic information sometimes improves human likeness but often does not. This study assessed whether LLM alignment with human behavior can be improved by integrating information from empirically derived human belief networks. Using data from a human survey, we estimated a belief network encompassing 64 topics loading on nine non-overlapping latent factors. We then seeded LLM-based agents with an opinion on one topic and assessed the alignment of their expressed opinions on the remaining test topics with corresponding human data. Role-playing based on demographic information alone did not align LLM and human opinions, but seeding the agent with a single belief greatly improved alignment for topics related in the belief network, though not for topics outside it. These results suggest a novel path for human-LLM belief alignment in work seeking to simulate and understand patterns of belief distributions in society.
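As a hedged sketch of the seeding idea this abstract describes: a role-playing agent receives demographic information plus a single seeded belief, and is then queried on held-out test topics. The prompt wording, the 1-to-5 rating scale, and the generic `llm` callable below are illustrative assumptions, not the paper's actual protocol; alignment would then be assessed by comparing the elicited ratings against the corresponding human survey responses.

```python
from typing import Callable

def build_agent_prompt(demographics: str, seed_topic: str, seed_opinion: str,
                       test_topic: str) -> str:
    """Role-play prompt: demographics plus one seeded belief, then a test-topic query.
    Wording is a hypothetical paraphrase, not the paper's prompt."""
    return (
        "You are role-playing a survey respondent.\n"
        f"Demographics: {demographics}\n"
        f"On the topic '{seed_topic}', your opinion is: {seed_opinion}\n"
        "Question: on a scale from 1 (strongly disagree) to 5 (strongly agree), "
        f"how do you feel about the following statement?\n'{test_topic}'\n"
        "Answer with a single number."
    )

def elicit_opinions(llm: Callable[[str], str], demographics: str,
                    seed_topic: str, seed_opinion: str,
                    test_topics: list[str]) -> dict[str, str]:
    """Query the seeded agent on each held-out test topic."""
    return {
        topic: llm(build_agent_prompt(demographics, seed_topic, seed_opinion, topic))
        for topic in test_topics
    }
```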