Charles Lewis
2023
Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models
Simon Stepputtis | Joseph Campbell | Yaqi Xie | Zhengyang Qi | Wenxin Zhang | Ruiyi Wang | Sanketh Rangreji | Charles Lewis | Katia Sycara
Findings of the Association for Computational Linguistics: EMNLP 2023
Deception and persuasion play a critical role in long-horizon dialogues between multiple parties, especially when the interests, goals, and motivations of the participants are not aligned. Such complex tasks pose challenges for current Large Language Models (LLMs), which deception and persuasion can easily mislead, especially in long-horizon multi-party dialogues. To this end, we explore the game of Avalon: The Resistance, a social deduction game in which players must determine each other’s hidden identities to complete their team’s objective. We introduce an online testbed and a dataset containing 20 carefully collected and labeled games among human players that exhibit long-horizon deception in a cooperative-competitive setting. We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player’s goal and motivation. In particular, we discuss the multimodal integration of the players’ chat and the game state that grounds the conversation, providing further insight into the true player identities. We find that even current state-of-the-art LLMs do not reach human performance, making our dataset a compelling benchmark for investigating the decision-making and language-processing capabilities of LLMs. Our dataset and online testbed can be found at our project website: https://sstepput.github.io/Avalon-NLU/
Theory of Mind for Multi-Agent Collaboration via Large Language Models
Huao Li | Yu Chong | Simon Stepputtis | Joseph Campbell | Dana Hughes | Charles Lewis | Katia Sycara
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviors and higher-order Theory of Mind capabilities among LLM-based agents. Our results reveal limitations in LLM-based agents’ planning optimization due to systematic failures in managing long-horizon contexts and hallucinations about the task state. We explore the use of explicit belief state representations to mitigate these issues, finding that they enhance task performance and the accuracy of ToM inferences for LLM-based agents.