Shuyuan Zheng


2024

Shall We Team Up: Exploring Spontaneous Cooperation of Competing LLM Agents
Zengqing Wu | Run Peng | Shuyuan Zheng | Qianying Liu | Xu Han | Brian I. Kwon | Makoto Onizuka | Shaojie Tang | Chuan Xiao
Findings of the Association for Computational Linguistics: EMNLP 2024

Large Language Models (LLMs) have increasingly been utilized in social simulations, where they are often guided by carefully crafted instructions to stably exhibit human-like behaviors during simulations. Nevertheless, we doubt the necessity of shaping agents’ behaviors for accurate social simulations. Instead, this paper emphasizes the importance of spontaneous phenomena, wherein agents deeply engage in contexts and make adaptive decisions without explicit directions. We explored spontaneous cooperation across three competitive scenarios and successfully simulated the gradual emergence of cooperation, findings that align closely with human behavioral data. This approach not only aids the computational social science community in bridging the gap between simulations and real-world dynamics but also offers the AI community a novel method to assess LLMs’ capability of deliberate reasoning. Our source code is available at https://github.com/wuzengqing001225/SABM_ShallWeTeamUp