David McNeill


2020

Learning Word Groundings from Humans Facilitated by Robot Emotional Displays
David McNeill | Casey Kennington
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

To work towards human-level acquisition and understanding of language, a robot must meet two requirements: the ability to learn words from interactions with its physical environment, and the ability to learn language from people in settings of language use, such as spoken dialogue. In a live interactive study, we test the hypothesis that emotional displays are a viable solution to the cold-start problem of how to communicate without relying on language the robot does not, and indeed cannot, yet know. We describe our modular system, which autonomously learns word groundings through interaction, and show in a user study with 21 participants that emotional displays improve both the quantity and the quality of the inputs provided to the robot.
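
The abstract does not specify the grounding model, but one common way to learn word groundings from interaction is to train a small classifier per word on perceptual features of candidate objects. The sketch below is only an illustration of that idea; the class and method names (WordGrounder, observe, ground) are assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's model): one binary classifier per word,
# trained on perceptual features gathered from interactions.
from collections import defaultdict
import numpy as np
from sklearn.linear_model import LogisticRegression

class WordGrounder:
    """Keeps one classifier per word; each maps object features to P(word applies)."""

    def __init__(self):
        self.examples = defaultdict(list)  # word -> [(features, label), ...]
        self.classifiers = {}              # word -> fitted LogisticRegression

    def observe(self, word, features, is_referent):
        """Record one interaction: the word was (or was not) used for this object."""
        self.examples[word].append((np.asarray(features, dtype=float), int(is_referent)))

    def train(self):
        """(Re)fit a classifier for every word with both positive and negative examples."""
        for word, data in self.examples.items():
            X = np.stack([feats for feats, _ in data])
            y = np.array([label for _, label in data])
            if len(set(y)) > 1:
                self.classifiers[word] = LogisticRegression().fit(X, y)

    def ground(self, word, features):
        """Estimated probability that the word applies to an object with these features."""
        clf = self.classifiers.get(word)
        if clf is None:
            return 0.5  # cold start: no usable evidence for this word yet
        x = np.asarray(features, dtype=float).reshape(1, -1)
        return float(clf.predict_proba(x)[0, 1])

After a handful of labeled interactions, ground("red", features) returns the estimated probability that "red" applies to the observed object, which is the kind of signal a robot can use when resolving references.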

rrSDS: Towards a Robot-ready Spoken Dialogue System
Casey Kennington | Daniele Moro | Lucas Marchand | Jake Carns | David McNeill
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

Spoken interaction with a physical robot requires a dialogue system that is modular, multimodal, distributed, incremental, and temporally aligned. In this demo paper, we make significant contributions towards fulfilling these requirements by expanding upon the ReTiCo incremental framework. We outline the incremental and multimodal modules and how their computation can be distributed, and we demonstrate the power and flexibility of our robot-ready spoken dialogue system, which can be integrated with almost any robot.
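
As a rough illustration of what a modular, incremental, and distributed pipeline looks like, here is a minimal sketch of modules that exchange timestamped incremental units over queues, each running in its own thread. The class and method names are assumptions made for illustration and do not reflect ReTiCo's actual API.

# Hypothetical sketch of an incremental, modular pipeline (names are illustrative,
# not ReTiCo's API): modules run concurrently and pass timestamped units downstream.
import queue
import threading
import time

class IncrementalUnit:
    """A small payload passed between modules, timestamped for temporal alignment."""
    def __init__(self, payload):
        self.payload = payload
        self.timestamp = time.time()

class Module(threading.Thread):
    """Consumes units from an input queue, processes them, and forwards the result
    to every subscriber. Because modules only share queues, they can run in separate
    threads, processes, or (with a networked queue) on separate machines."""

    def __init__(self):
        super().__init__(daemon=True)
        self.input_queue = queue.Queue()
        self.subscribers = []

    def subscribe(self, module):
        self.subscribers.append(module)

    def process(self, iu):
        raise NotImplementedError

    def run(self):
        while True:
            iu = self.input_queue.get()
            out = self.process(iu)
            if out is not None:
                for sub in self.subscribers:
                    sub.input_queue.put(out)

An ASR module, for instance, would emit partial-hypothesis units as audio arrives, and downstream understanding or robot-control modules would update incrementally rather than waiting for a complete utterance.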

2007

SIDGRID: A Framework for Distributed and Integrated Multimodal Annotation and Archiving and Analysis
Gina-Anne Levow | Bennett Bertenthal | Mark Hereld | Sarah Kenny | David McNeill | Michael Papka | Sonjia Waxmonsky
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue