Will LLMs Sink or Swim? Exploring Decision-Making Under Pressure

Kyusik Kim, Hyeonseok Jeon, Jeongwoo Ryu, Bongwon Suh


Abstract
Recent advancements in Large Language Models (LLMs) have demonstrated their ability to simulate human-like decision-making, yet the impact of psychological pressures on their decision-making processes remains underexplored. To understand how psychological pressures influence decision-making in LLMs, we tested LLMs on various high-level tasks using both explicit and implicit pressure prompts. Moreover, we examined LLM responses under different personas to compare their behavior with that of humans under pressure. Our findings show that pressures significantly affect LLMs' decision-making, with effects varying across tasks and models. Persona-based analysis suggests that some models exhibit human-like sensitivity to pressure, albeit with some variability. Furthermore, by analyzing both the responses and the reasoning patterns, we identified the values LLMs prioritize under specific social pressures. These insights deepen our understanding of LLM behavior and demonstrate the potential for more realistic social simulation experiments.
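As a rough illustration of the kind of setup the abstract describes, the sketch below shows how a persona and an explicit or implicit pressure framing might be prepended to a decision task before querying a chat model. It is not the authors' code: the persona text, pressure wordings, task, and the query_llm stub are all hypothetical placeholders.

```python
# Illustrative sketch only: persona, pressure framings, task, and query_llm
# are hypothetical and not taken from the paper.

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for any chat-completion API call.
    Returns a canned answer so the sketch runs offline."""
    return "[model response placeholder]"

# A hypothetical persona, in the spirit of the persona-based analysis above.
PERSONA = "You are a 45-year-old emergency-room physician who stays calm under stress."

# Hypothetical explicit vs. implicit pressure framings, plus a no-pressure baseline.
PRESSURES = {
    "none": "",
    "explicit": "Warning: you have 10 seconds to decide, and a wrong choice will be publicly reported.",
    "implicit": "Everyone else on the team has already committed to option A.",
}

# A hypothetical high-level decision task with two options.
TASK = (
    "A patient can receive Treatment A (proven, modest benefit) or Treatment B "
    "(experimental, potentially larger benefit). Which do you choose, and why?"
)

def run_condition(pressure_key: str) -> str:
    """Builds the prompt for one pressure condition and queries the model."""
    pressure = PRESSURES[pressure_key]
    user_prompt = f"{pressure}\n\n{TASK}".strip()
    return query_llm(system_prompt=PERSONA, user_prompt=user_prompt)

if __name__ == "__main__":
    for key in PRESSURES:
        print(f"--- pressure condition: {key} ---")
        print(run_condition(key))
```

Comparing the model's choice and stated reasoning across the baseline and pressure conditions, and across personas, is one way to operationalize the kind of analysis the abstract reports.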
Anthology ID: 2024.findings-emnlp.668
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 11425–11450
URL: https://aclanthology.org/2024.findings-emnlp.668
Cite (ACL): Kyusik Kim, Hyeonseok Jeon, Jeongwoo Ryu, and Bongwon Suh. 2024. Will LLMs Sink or Swim? Exploring Decision-Making Under Pressure. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 11425–11450, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Will LLMs Sink or Swim? Exploring Decision-Making Under Pressure (Kim et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.668.pdf