Ruiyi Wang


2024

PATIENT-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals
Ruiyi Wang | Stephanie Milani | Jamie C. Chiu | Jiayin Zhi | Shaun M. Eack | Travis Labrum | Samuel M Murphy | Nev Jones | Kate V Hardy | Hong Shen | Fei Fang | Zhiyu Chen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Mental illness remains one of the most critical public health issues. Despite its importance, many mental health professionals highlight a disconnect between their training and real-world patient practice. To help bridge this gap, we propose PATIENT-Ψ, a novel patient simulation framework for cognitive behavior therapy (CBT) training. To build PATIENT-Ψ, we construct diverse patient cognitive models based on CBT principles and use large language models (LLMs) programmed with these cognitive models to act as simulated therapy patients. We propose an interactive training scheme, PATIENT-Ψ-TRAINER, for mental health trainees to practice a key skill in CBT – formulating the cognitive model of the patient – through role-playing a therapy session with PATIENT-Ψ. To evaluate PATIENT-Ψ, we conducted a comprehensive user study with 13 mental health trainees and 20 experts. The results demonstrate that practice using PATIENT-Ψ-TRAINER enhances the perceived skill acquisition and confidence of the trainees beyond existing forms of training such as textbooks, videos, and role-play with non-patients. Based on the experts’ perceptions, PATIENT-Ψ is perceived to be closer to real patient interactions than GPT-4, and PATIENT-Ψ-TRAINER holds strong promise for improving trainee competencies. Our code and data are released at https://github.com/ruiyiw/patient-psi.
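As a rough illustration of the framework’s central idea (this is a minimal sketch, not the released code; the cognitive-model fields, prompt wording, and model name are assumptions), conditioning an LLM on a structured CBT cognitive model via its system prompt might look like this:

```python
# Illustrative sketch only: condition an LLM on a CBT cognitive model so it
# role-plays a therapy patient. Field names and prompts are assumptions.
from dataclasses import dataclass
from openai import OpenAI  # official openai package; any chat LLM would do

@dataclass
class CognitiveModel:
    situation: str           # triggering situation
    automatic_thoughts: str  # thoughts evoked by the situation
    emotions: str            # resulting emotions
    behaviors: str           # resulting behaviors
    core_beliefs: str        # underlying core beliefs

def patient_system_prompt(cm: CognitiveModel) -> str:
    return (
        "You are role-playing a therapy patient in a CBT session. "
        "Stay in character; reveal the details below only gradually, "
        "as a real patient would, rather than volunteering them.\n"
        f"Situation: {cm.situation}\n"
        f"Automatic thoughts: {cm.automatic_thoughts}\n"
        f"Emotions: {cm.emotions}\n"
        f"Behaviors: {cm.behaviors}\n"
        f"Core beliefs: {cm.core_beliefs}"
    )

def patient_reply(client: OpenAI, cm: CognitiveModel, history: list[dict]) -> str:
    # history holds prior turns, e.g. {"role": "user", "content": "<therapist turn>"}
    messages = [{"role": "system", "content": patient_system_prompt(cm)}, *history]
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content
```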

SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents
Ruiyi Wang | Haofei Yu | Wenxin Zhang | Zhengyang Qi | Maarten Sap | Yonatan Bisk | Graham Neubig | Hao Zhu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Humans learn social skills through both imitation and social interaction. This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, that improves the social intelligence of language agents. This method leverages behavior cloning and self-reinforcement training on social interaction data filtered according to large language model (LLM) ratings. We show that our training method allows a 7B LLM to reach the social goal completion ability of an expert model (a GPT-4-based agent) without losing more generic abilities, such as the ability to answer knowledge-based questions. We also demonstrate that this training paradigm uncovers weaknesses in standard evaluation and safety training paradigms: (1) LLM-based evaluation of social intelligence overestimates the abilities of language agents trained specifically for social interaction, and (2) despite not training for better safety or question answering (QA) ability, our method improves the safety of language agents and maintains general QA ability on the MMLU benchmark.
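The rating-based filtering step at the heart of the method can be sketched in a few lines. This is a minimal illustration, not the released SOTOPIA-π code; the Episode structure, the rating threshold, and the prompt/completion format are assumptions:

```python
# Illustrative sketch only: keep episodes whose LLM-assigned goal-completion
# rating clears a threshold, then turn the survivors into supervised
# fine-tuning pairs (behavior cloning). Structures are assumptions.
from dataclasses import dataclass

@dataclass
class Episode:
    dialogue: list[tuple[str, str]]  # (speaker, utterance) turns in order
    agent: str                       # the speaker whose behavior we clone
    rating: float                    # LLM-assigned goal-completion score

def to_sft_examples(episodes: list[Episode], threshold: float = 7.0) -> list[dict]:
    examples = []
    for ep in episodes:
        if ep.rating < threshold:  # keep only highly rated interactions
            continue
        context: list[str] = []
        for speaker, utterance in ep.dialogue:
            if speaker == ep.agent and context:
                # train the agent to produce its turn given the dialogue so far
                examples.append({"prompt": "\n".join(context), "completion": utterance})
            context.append(f"{speaker}: {utterance}")
    return examples
```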

2023

Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models
Simon Stepputtis | Joseph Campbell | Yaqi Xie | Zhengyang Qi | Wenxin Zhang | Ruiyi Wang | Sanketh Rangreji | Charles Lewis | Katia Sycara
Findings of the Association for Computational Linguistics: EMNLP 2023

Deception and persuasion play a critical role in long-horizon dialogues between multiple parties, especially when the interests, goals, and motivations of the participants are not aligned. Such complex tasks pose challenges for current large language models (LLMs), which deception and persuasion can easily mislead, especially in long-horizon multi-party dialogues. To this end, we explore the game of Avalon: The Resistance, a social deduction game in which players must determine each other’s hidden identities to complete their team’s objective. We introduce an online testbed and a dataset containing 20 carefully collected and labeled games among human players that exhibit long-horizon deception in a cooperative-competitive setting. We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player’s goal and motivation. In particular, we discuss the multimodal integration of the chat between the players and the game’s state that grounds the conversation, providing further insights into the true player identities. We find that even current state-of-the-art LLMs do not reach human performance, making our dataset a compelling benchmark for investigating the decision-making and language-processing capabilities of LLMs. Our dataset and online testbed can be found at our project website: https://sstepput.github.io/Avalon-NLU/
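A minimal sketch of the role-identification task the benchmark poses (not the benchmark’s actual code; the model name, role set, and prompt wording are assumptions) could look like this:

```python
# Illustrative sketch only: ask an LLM to infer hidden Avalon roles from the
# chat log plus the grounding game state. Role set and prompt are assumptions.
from openai import OpenAI  # official openai package; any chat LLM would do

# A typical six-player configuration; the dataset's exact setup may differ.
ROLES = ["Merlin", "Percival", "Loyal Servant", "Morgana", "Assassin"]

def guess_roles(client: OpenAI, chat_log: str, game_state: str) -> str:
    prompt = (
        "Below is the chat log of a six-player game of Avalon: The "
        "Resistance, followed by the observable game state (quest "
        "proposals, votes, and outcomes). Players may be lying. For each "
        f"player, guess their hidden role from {ROLES} and briefly justify "
        "your answer.\n\n"
        f"CHAT LOG:\n{chat_log}\n\nGAME STATE:\n{game_state}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```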