Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans

Rami Younes, Gérard Bailly, Frederic Elisei, Damien Pellier

Abstract
Verbal and nonverbal communication skills are essential for human-robot interaction, in particular when the agents are involved in a shared task. We address the specific situation in which the robot is the only agent that knows the plan and the goal of the task and has to instruct its human partner. The case study is a brick assembly. We describe a multi-layered verbal depictor whose semantic, syntactic, and lexical settings were collected and evaluated via crowdsourcing. One crowdsourced experiment involves a robot-instructed pick-and-place task. We show that implicitly referring to achieved subgoals (stairs, pillars, etc.) increases the performance of human partners.
Anthology ID:
2022.sigdial-1.17
Volume:
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
September
Year:
2022
Address:
Edinburgh, UK
Editors:
Oliver Lemon, Dilek Hakkani-Tur, Junyi Jessy Li, Arash Ashrafzadeh, Daniel Hernández Garcia, Malihe Alikhani, David Vandyke, Ondřej Dušek
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
159–171
URL:
https://aclanthology.org/2022.sigdial-1.17
DOI:
10.18653/v1/2022.sigdial-1.17
Cite (ACL):
Rami Younes, Gérard Bailly, Frederic Elisei, and Damien Pellier. 2022. Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 159–171, Edinburgh, UK. Association for Computational Linguistics.
Cite (Informal):
Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans (Younes et al., SIGDIAL 2022)
PDF:
https://aclanthology.org/2022.sigdial-1.17.pdf
Video:
https://youtu.be/PbLerk3eURo