Large Language Models respond to Influence like Humans
Lewis Griffin | Bennett Kleinberg | Maximilian Mozes | Kimberly Mai | Maria Do Mar Vau | Matthew Caldwell | Augustine Mavor-Parker
Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)
Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence – the Illusory Truth Effect (ITE) – whereby earlier exposure to a statement boosts a later truthfulness test rating. Analysis of newly collected data from human and LLM-simulated subjects (1,000 of each) showed the same pattern of effects in both populations, although with greater per-statement variability for the LLM. The second study concerned a specific mode of influence – populist framing of news to increase its persuasiveness and political mobilization. Newly collected data from simulated subjects was compared to previously published data from a 15-country experiment on 7,286 human participants. Several effects from the human study were replicated by the simulated study, including ones that had surprised the authors of the human study by contradicting their theoretical expectations; but some significant relationships found in the human data were absent from the LLM data. Together, the two studies support the view that LLMs have potential to act as models of the effect of influence.