Kathleen Eberhard


2016

Situated dialogue systems that interact with humans as part of a team (e.g., robot teammates) need to be able to use information from communication channels to gauge the coordination level and effectiveness of the team. Currently, the feasibility of this end goal is limited by several gaps in both the empirical and computational literature. The purpose of this paper is to address those gaps in the following ways: (1) investigate which properties of task-oriented discourse correspond with effective performance in human teams, and (2) discuss how and to what extent these properties can be utilized in spoken dialogue systems. To this end, we analyzed natural language data from a unique corpus of spontaneous, task-oriented dialogue (the CReST corpus), which was annotated for disfluencies and conversational moves. We found that effective teams produced more self-repair disfluencies and used specific communication strategies to facilitate grounding and coordination. Our results indicate that truly robust and natural dialogue systems will need to interpret highly disfluent utterances and to employ specific collaborative mechanisms to facilitate grounding. These data shed light on effective communication in performance scenarios and directly inform the development of robust dialogue systems for situated artificial agents.
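To make the notion of "interpreting a self-repair disfluency" concrete, here is a minimal, hypothetical sketch (not from the paper) of the common three-part analysis of a self-repair: a reparandum (abandoned material), an optional editing phrase or interregnum (e.g., "uh, I mean"), and the repair. A dialogue system can recover the intended fluent utterance by dropping the first two spans; the function name and token spans below are illustrative assumptions.

```python
def clean_utterance(tokens, reparandum_span, interregnum_span):
    """Drop the reparandum and interregnum tokens of a self-repair,
    keeping the repair, to recover the speaker's intended utterance.
    Spans are half-open (start, end) token-index pairs."""
    drop = set(range(*reparandum_span)) | set(range(*interregnum_span))
    return [t for i, t in enumerate(tokens) if i not in drop]

# "put it in the blue uh I mean the green box"
tokens = "put it in the blue uh I mean the green box".split()
# reparandum = "the blue" (tokens 3-4), interregnum = "uh I mean" (tokens 5-7)
print(" ".join(clean_utterance(tokens, (3, 5), (5, 8))))
# put it in the green box
```

This only illustrates the structure of a single repair once its spans are known; detecting those spans in running speech is the hard problem the abstract points to.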

2013

Situated dialogic corpora are invaluable resources for understanding the complex relationship between language, perception, and action, as they are based on naturalistic dialogue situations in which the interactants are given shared goals to be accomplished in the real world. In such situations, verbal interactions are intertwined with actions, and shared goals can only be achieved via dynamic negotiation processes based on common ground constructed from discourse history as well as the interactants' knowledge about the status of actions. In this paper, we propose four major dimensions of collaborative tasks that affect the negotiation processes among interactants and, hence, the structure of the dialogue. Based on a review of available dialogue corpora and annotation manuals, we show that existing annotation schemes do not yet adequately account for the complex dialogue processes in situated task-based scenarios. We illustrate the effects of specific features of a scenario using annotated samples of dialogue taken from the literature as well as our own corpora, and end with a brief discussion of the challenges ahead.

2010

This paper introduces a novel corpus of natural language dialogues obtained from humans performing a cooperative remote search task (CReST) as it occurs naturally in a variety of scenarios (e.g., search and rescue missions in disaster areas). This corpus is unique in that it involves remote collaborations between two interlocutors who each have to perform tasks that require the other's assistance. In addition, one interlocutor's tasks require physical movement through an indoor environment as well as interactions with physical objects within the environment. The multi-modal corpus contains the speech signals as well as transcriptions of the dialogues, which are additionally annotated for dialogue structure, disfluencies, and for constituent and dependency syntax. On the dialogue level, the corpus was annotated for separate dialogue moves, based on the classification developed by Carletta et al. (1997) for coding task-oriented dialogues. Disfluencies were annotated using the scheme developed by Lickley (1998). The syntactic annotation comprises POS annotation, Penn Treebank-style constituent annotations, and dependency annotations based on the dependencies of pennconverter.
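As a rough illustration of how the annotation layers described above line up per utterance, here is a hypothetical sketch in Python. The field names, example move label ("instruct", in the spirit of the Carletta et al. scheme), and per-token disfluency tags are illustrative assumptions, not the actual CReST file format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AnnotatedUtterance:
    """One utterance with the parallel annotation layers the corpus
    description mentions; all names here are illustrative."""
    speaker: str
    tokens: List[str]
    pos_tags: List[str]          # POS layer (Penn Treebank tagset)
    dialogue_move: str           # Carletta-style move label
    disfluency_tags: List[str]   # Lickley-style per-token tags

utt = AnnotatedUtterance(
    speaker="Director",
    tokens=["go", "to", "the", "the", "next", "room"],
    pos_tags=["VB", "IN", "DT", "DT", "JJ", "NN"],
    dialogue_move="instruct",
    disfluency_tags=["-", "-", "REP", "-", "-", "-"],  # "the the" repetition
)

# The token-level layers must stay aligned with the token sequence.
assert len(utt.tokens) == len(utt.pos_tags) == len(utt.disfluency_tags)
```

Constituent and dependency syntax would add further layers (tree structures rather than per-token tags), which is why multi-layer corpora like this are typically distributed as aligned stand-off annotations rather than a single flat table.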