Decision-Oriented Dialogue for Human-AI Collaboration

Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner

Abstract
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains in which users face everyday decisions: (1) choosing an assignment of reviewers to conference papers, (2) planning a multi-step itinerary in a city, and (3) negotiating travel plans for a group of friends. In each of these settings, AI assistants and users have disparate abilities that they must combine to arrive at the best decision: Assistants can access and process large amounts of information, while users have preferences and constraints external to the system. For each task, we build a dialogue environment where agents receive a reward based on the quality of the final decision they reach. We evaluate LMs in self-play and in collaboration with humans and find that they fall short compared to human assistants, achieving much lower rewards despite engaging in longer dialogues. We highlight a number of challenges models face in decision-oriented dialogues, ranging from goal-directed behavior to reasoning and optimization, and release our environments as a testbed for future work.
Anthology ID:
2024.tacl-1.50
Volume:
Transactions of the Association for Computational Linguistics, Volume 12
Year:
2024
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
892–911
URL:
https://aclanthology.org/2024.tacl-1.50/
DOI:
10.1162/tacl_a_00679
Cite (ACL):
Jessy Lin, Nicholas Tomlin, Jacob Andreas, and Jason Eisner. 2024. Decision-Oriented Dialogue for Human-AI Collaboration. Transactions of the Association for Computational Linguistics, 12:892–911.
Cite (Informal):
Decision-Oriented Dialogue for Human-AI Collaboration (Lin et al., TACL 2024)
PDF:
https://aclanthology.org/2024.tacl-1.50.pdf