Proceedings of the Workshop on NLG for Human–Robot Interaction
Mary Ellen Foster, Hendrik Buschmeier, Dimitra Gkatzia
Context-sensitive Natural Language Generation for robot-assisted second language tutoring
Bram Willemsen, Jan de Wit, Emiel Krahmer, Mirjam de Haas, Paul Vogt
This paper describes the L2TOR intelligent tutoring system (ITS), focusing primarily on its output generation module. The L2TOR ITS is developed for the purpose of investigating the efficacy of robot-assisted second language tutoring in early childhood. We explain the process of generating contextually relevant utterances, such as task-specific feedback messages, and discuss challenges regarding multimodality and multilingualism for situated natural language generation from a robot tutoring perspective.
Learning from limited datasets: Implications for Natural Language Generation and Human-Robot Interaction
Jekaterina Belakova, Dimitra Gkatzia
One of the most natural ways for humans and robots to communicate is through spoken language. Training human-robot interaction systems requires access to large datasets, which are expensive and labour-intensive to obtain. In this paper, we describe an approach for learning from minimal data, using language understanding in spoken dialogue systems as a toy example. Understanding spoken language is crucial because it has implications for natural language generation: correctly understanding a user's utterance leads to choosing the right response or action. Finally, we discuss implications for Natural Language Generation in Human-Robot Interaction.
Shaping a social robot’s humor with Natural Language Generation and socially-aware reinforcement learning
Hannes Ritschel, Elisabeth André
Humor plays an important role in human interaction: it regulates conversations and increases interpersonal attraction and trust. For social robots, humor is one way to make interactions more natural and enjoyable and to increase credibility and acceptance. In combination with appropriate non-verbal behavior, natural language generation offers the ability to create content on the fly. This work outlines the building blocks for providing an individual, multimodal interaction experience by shaping the robot's humor with the help of Natural Language Generation and Reinforcement Learning based on human social signals.
From sensors to sense: Integrated heterogeneous ontologies for Natural Language Generation
Mihai Pomarlan, Robert Porzel, John Bateman, Rainer Malaka
We propose the combination of a robotics ontology (KnowRob) with a linguistically motivated one (GUM) under the upper ontology DUL. We use the DUL Event, Situation, Description pattern to formalize reasoning techniques that convert between a robot's belief state and its linguistic utterances. We plan to employ these techniques to equip robots with a reason-aloud ability, through which they can explain their actions as they perform them, in natural language, at a level of granularity appropriate to the user, their query, and the context at hand.
A farewell to arms: Non-verbal communication for non-humanoid robots
Aaron G. Cass, Kristina Striegnitz, Nick Webb
Human-robot interactions situated in a dynamic environment create a unique mix of challenges for conversational systems. We argue that, on the one hand, NLG can contribute to addressing these challenges and that, on the other hand, they pose interesting research problems for NLG. To illustrate our position we describe our research on non-humanoid robots using non-verbal signals to support communication.
Being data-driven is not enough: Revisiting interactive instruction giving as a challenge for NLG
Sina Zarrieß, David Schlangen
Modeling traditional NLG tasks with data-driven techniques has been a major focus of research in NLG in the past decade. We argue that existing modeling techniques are mostly tailored to textual data and are not sufficient to make NLG technology meet the requirements of agents which target fluid interaction and collaboration in the real world. We revisit interactive instruction giving as a challenge for data-driven NLG and, based on insights from previous GIVE challenges, propose that instruction giving should be addressed in a setting that involves visual grounding and spoken language. These basic design decisions will require NLG frameworks that are capable of monitoring their environment as well as timing and revising their verbal output. We believe that these are core capabilities for making NLG technology transferable to interactive systems.