Multimodal Resources for Human-Robot Communication Modelling

Stavroula–Evita Fotinea, Eleni Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, Kyriaki Vasilaki


Abstract
This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus in this framework of analysis is the definition of the semantics of human actions in interaction, their capture, and their representation as behavioural patterns that, in turn, feed a multimodal human-robot communication system. The semantic analysis encompasses both oral and sign languages, as well as verbal and non-verbal communicative signals, in order to achieve effective, natural interaction between elderly users with mild walking and cognitive impairments and an assistive robotic platform.
Anthology ID:
L16-1551
Volume:
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Month:
May
Year:
2016
Address:
Portorož, Slovenia
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3455–3460
URL:
https://aclanthology.org/L16-1551
Cite (ACL):
Stavroula–Evita Fotinea, Eleni Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, and Kyriaki Vasilaki. 2016. Multimodal Resources for Human-Robot Communication Modelling. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3455–3460, Portorož, Slovenia. European Language Resources Association (ELRA).
Cite (Informal):
Multimodal Resources for Human-Robot Communication Modelling (Fotinea et al., LREC 2016)
PDF:
https://aclanthology.org/L16-1551.pdf