Elisabetta Bevacqua


2014

A Database of Full Body Virtual Interactions Annotated with Expressivity Scores
Virginie Demulier | Elisabetta Bevacqua | Florian Focone | Tom Giraud | Pamela Carreno | Brice Isableu | Sylvie Gibet | Pierre De Loor | Jean-Claude Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but they remain limited in terms of subtle dyadic interaction patterns. Our project aims at full-body expressive interactions between a user and an autonomous virtual agent. Currently available databases do not cover full-body expressivity or interaction patterns mediated by avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme used to manually annotate the collected videos. Reliability measures for global annotations of expressivity and interaction are also provided.

2010

The AVLaughterCycle Database
Jérôme Urbain | Elisabetta Bevacqua | Thierry Dutoit | Alexis Moinet | Radoslaw Niewiadomski | Catherine Pelachaud | Benjamin Picart | Joëlle Tilmanne | Johannes Wagner
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE’09 Workshop in Genova. Twenty-four subjects participated. The freely available database includes the audio signal and video recordings as well as facial motion tracking, obtained via markers placed on the subjects’ faces. Annotations of the recordings, focusing on laughter description, are also provided and presented in this paper. In total, the corpus contains more than 1000 spontaneous laughs and 27 acted laughs. The laughter utterances are highly variable: laughter duration ranges from 250 ms to 82 s, and the sounds cover voiced vowels, breath-like expirations, and hum-, hiccup-, or grunt-like sounds. However, as the subjects had no one to interact with, the database contains very few speech-laughs. Acted laughs tend to be longer than spontaneous ones and are more often composed of voiced vowels. The database can be useful for automatic laughter processing and for cognitive science research. Within the AVLaughterCycle project, it served to animate a laughing virtual agent whose output laugh is linked to the conversational partner’s input laugh.