Youssef Hmamouche
2020
Multimodal Corpus of Bidirectional Conversation of Human-human and Human-robot Interaction during fMRI Scanning
Birgit Rauchbauer | Youssef Hmamouche | Brigitte Bigi | Laurent Prévot | Magalie Ochs | Thierry Chaminade
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper we present an investigation of real-life, bidirectional conversations. We introduce a multimodal corpus derived from natural conversations alternating between human-human and human-robot interactions, where the human-robot interactions serve as a control condition for the social nature of the human-human conversations. The experimental set-up consisted of conversations between a participant in a functional magnetic resonance imaging (fMRI) scanner and a human confederate or a conversational robot outside the scanner room, connected via bidirectional audio and unidirectional videoconferencing (from outside to inside the scanner). A cover story provided a framework for natural, real-life conversations about images from an advertisement campaign. During the conversations we collected a multimodal corpus for a comprehensive characterization of bidirectional conversations. This corpus includes neural data from fMRI, physiological data (blood flow pulse and respiration), transcribed conversational data, as well as face and eye-tracking recordings. We thus present a unique corpus for studying human conversations that combines neural, physiological and behavioral data.
BrainPredict: a Tool for Predicting and Visualising Local Brain Activity
Youssef Hmamouche | Laurent Prévot | Magalie Ochs | Thierry Chaminade
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper, we present a tool allowing dynamic prediction and visualisation of an individual’s local brain activity during a conversation. The prediction module of this tool is based on classifiers trained on a corpus of human-human and human-robot conversations that includes fMRI recordings. More precisely, the module takes as input behavioral features computed from raw data, mainly the participant’s and the interlocutor’s speech, but also the participant’s visual input and eye movements. The visualisation module shows, in real time, the dynamics of active brain areas synchronised with the raw behavioral data. In addition, it shows which integrated behavioral features are used to predict the activity in individual brain areas.
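The abstract does not specify which classifier BrainPredict uses, but the pipeline it describes (behavioral feature vectors per fMRI frame classified into predicted activity for a brain region) can be sketched minimally. The sketch below is purely illustrative: the feature names (participant speech, interlocutor speech, gaze on face), the toy data, and the choice of a nearest-centroid classifier are all assumptions, not the authors' actual features or model.

```python
# Hypothetical sketch of a BrainPredict-style prediction step:
# one behavioral feature vector per fMRI frame -> binary
# "active / inactive" prediction for one brain region.
# Features, labels and classifier are illustrative only.

def nearest_centroid_fit(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        dim = len(rows[0])
        centroids[label] = [sum(r[i] for r in rows) / len(rows)
                            for i in range(dim)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Return the label whose centroid is closest in squared distance."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

# Toy frames: [participant_speech, interlocutor_speech, gaze_on_face]
X_train = [
    [0.9, 0.1, 0.8],  # participant speaking, looking at interlocutor
    [0.8, 0.2, 0.7],
    [0.1, 0.9, 0.9],  # participant listening
    [0.0, 0.8, 0.6],
]
y_train = [1, 1, 0, 0]  # 1 = region predicted active on this frame

model = nearest_centroid_fit(X_train, y_train)
print(nearest_centroid_predict(model, [0.85, 0.15, 0.75]))  # -> 1
```

In the tool described above, such per-region predictions would then be streamed to the visualisation module frame by frame.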