James Rehg
2023
Werewolf Among Us: Multimodal Resources for Modeling Persuasion Behaviors in Social Deduction Games
Bolin Lai | Hongxin Zhang | Miao Liu | Aryan Pariani | Fiona Ryan | Wenqi Jia | Shirley Anugrah Hayati | James Rehg | Diyi Yang
Findings of the Association for Computational Linguistics: ACL 2023
Persuasion modeling is a key building block for conversational agents. Existing work in this direction is limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments showing how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset is available at https://persuasion-deductiongame.socialai-data.org. The code and models are available at https://github.com/SALT-NLP/PersuationGames.
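As a rough illustration of how the utterance-level annotations described above might be consumed, the sketch below loads a hypothetical JSON export of the dataset and tallies strategy labels per utterance. The file name and the field names (`utterances`, `strategy`) are assumptions made for illustration, not the dataset's actual schema; see the dataset page and repository linked above for the real format.

```python
import json
from collections import Counter

# Hypothetical loader for utterance-level persuasion strategy annotations.
# File name and field names are assumptions for illustration only; consult
# https://persuasion-deductiongame.socialai-data.org for the actual schema.

def load_games(path="werewolf_annotations.json"):
    """Load a list of games, each containing utterance-level annotations."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def strategy_counts(games):
    """Count how often each persuasion strategy label appears across all games."""
    counts = Counter()
    for game in games:
        for utt in game.get("utterances", []):
            counts[utt.get("strategy", "No Strategy")] += 1
    return counts

if __name__ == "__main__":
    games = load_games()
    for strategy, n in strategy_counts(games).most_common():
        print(f"{strategy}: {n}")
```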
2020
Where Are You? Localization from Embodied Dialog
Meera Hahn | Jacob Krantz | Dhruv Batra | Devi Parikh | James Rehg | Stefan Lee | Peter Anderson
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
We present WHERE ARE YOU? (WAY), a dataset of ~6k dialogs in which two humans – an Observer and a Locator – complete a cooperative localization task. The Observer is spawned at random in a 3D environment and can navigate from first-person views while answering questions from the Locator. The Locator must localize the Observer in a detailed top-down map by asking questions and giving instructions. Based on this dataset, we define three challenging tasks: Localization from Embodied Dialog or LED (localizing the Observer from dialog history), Embodied Visual Dialog (modeling the Observer), and Cooperative Localization (modeling both agents). In this paper, we focus on the LED task – providing a strong baseline model with detailed ablations characterizing both dataset biases and the importance of various modeling choices. Our best model achieves 32.7% success at identifying the Observer’s location within 3m in unseen buildings, vs. 70.4% for human Locators.
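To make the reported metric concrete, here is a minimal sketch of an LED-style evaluation in which an episode counts as a success when the predicted Observer location falls within 3m of the ground truth. The use of Euclidean distance in planar map coordinates and the function and variable names are assumptions for illustration, not the benchmark's official evaluation code.

```python
import math

# Minimal sketch of a success@3m metric for Localization from Embodied Dialog (LED).
# Assumes predictions and ground truths are (x, y) positions in meters on the
# top-down map; the actual benchmark may use a different distance or coordinate frame.

def led_success_rate(predictions, ground_truths, threshold_m=3.0):
    """Fraction of episodes whose predicted location lies within `threshold_m` meters."""
    assert len(predictions) == len(ground_truths)
    hits = 0
    for (px, py), (gx, gy) in zip(predictions, ground_truths):
        if math.dist((px, py), (gx, gy)) <= threshold_m:
            hits += 1
    return hits / len(predictions)

# Example: two of three predictions fall within 3m of their targets.
preds = [(1.0, 2.0), (10.0, 10.0), (4.5, 0.0)]
gts   = [(1.5, 2.5), (3.0, 3.0), (5.0, 1.0)]
print(f"Success@3m: {led_success_rate(preds, gts):.1%}")  # -> 66.7%
```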