Achieving Common Ground in Multi-modal Dialogue

Malihe Alikhani, Matthew Stone


Abstract
All communication aims at achieving common ground (grounding): interlocutors can work together effectively only with mutual beliefs about what the state of the world is, about what their goals are, and about how they plan to make their goals a reality. Computational dialogue research offers some classic results on grounding, which unfortunately offer scant guidance to the design of grounding modules and behaviors in cutting-edge systems. In this tutorial, we focus on three main topic areas: 1) grounding in human–human communication; 2) grounding in dialogue systems; and 3) grounding in multi-modal interactive systems, including image-oriented conversations and human–robot interactions. We highlight a number of achievements of recent computational research in coordinating complex content, show how these results lead to rich and challenging opportunities for doing grounding in more flexible and powerful ways, and canvass relevant insights from the literature on human–human conversation. We expect that the tutorial will be of interest to researchers in dialogue systems, computational semantics and cognitive modeling, and hope that it will catalyze research and system building that more directly explores the creative, strategic ways conversational agents might be able to seek and offer evidence about their understanding of their interlocutors.
Anthology ID:
2020.acl-tutorials.3
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
10–15
URL:
https://aclanthology.org/2020.acl-tutorials.3
DOI:
10.18653/v1/2020.acl-tutorials.3
PDF:
https://aclanthology.org/2020.acl-tutorials.3.pdf