People in Language, Vision and the Mind (2020)


Proceedings of LREC2020 Workshop "People in language, vision and the mind" (ONION2020)
Patrizia Paggio | Albert Gatt | Roman Klinger

Prototypes and Recognition of Self in Depictions of Christ
Carla Sophie Lembke | Per Olav Folgerø | Alf Edgar Andresen | Christer Johansson

We present a study of prototype effects. We designed an experiment investigating the effect of adapting a prototypical image towards more human, male or female, prototypes, and additionally investigated the effect of self-recognition in a manipulated image. Results show that decisions are affected by prototypicality, but we find less evidence that self-recognition further enhances perceived attractiveness. This study has implications for the psychological perception of faces and may contribute to the study of Christian imagery.

Analysis of Body Behaviours in Human-Human and Human-Robot Interactions
Taiga Mori | Kristiina Jokinen | Yasuharu Den

We conducted a preliminary comparison of human-robot (HR) interaction with human-human (HH) interaction, carried out in English and in Japanese. We found that body gestures increased in HR, while hand and head gestures decreased in HR. Hand gestures were composed of more diverse and complex forms, trajectories and functions in HH than in HR. Moreover, English speakers produced six times more hand gestures than Japanese speakers in HH. Regarding head gestures, even though there was no difference in their frequency between English and Japanese speakers in HH, Japanese speakers produced slightly more nods during the robot's speech than English speakers in HR. Furthermore, the positions of nods differed depending on the language. Concerning body gestures, participants produced them mostly to regulate an appropriate distance from the robot in HR. Additionally, English speakers produced slightly more body gestures than Japanese speakers.

Automatic Detection and Classification of Head Movements in Face-to-Face Conversations
Patrizia Paggio | Manex Agirrezabal | Bart Jongejan | Costanza Navarretta

This paper presents an approach to automatic head movement detection and classification in data from a corpus of video-recorded face-to-face conversations in Danish involving 12 different speakers. A number of classifiers were trained with different combinations of visual, acoustic and word features and tested in a leave-one-out cross-validation scenario. The visual movement features were extracted from the raw video data using OpenPose, and the acoustic ones using Praat. The best results were obtained by a Multilayer Perceptron classifier, which reached an average F1 score of 0.68 across the 12 speakers for head movement detection, and 0.40 for head movement classification with four different classes. In both cases, the classifier outperformed a simple most-frequent-class baseline as well as a more advanced baseline relying only on velocity features.
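The evaluation setup described in the abstract can be illustrated with a short sketch: a Multilayer Perceptron trained on combined visual and acoustic features and evaluated with leave-one-speaker-out cross-validation. The feature dimensions, synthetic data and MLP hyperparameters below are illustrative assumptions, not the authors' actual configuration; only the overall procedure (per-speaker leave-one-out evaluation scored with F1) follows the abstract.

```python
# Minimal sketch, assuming per-frame OpenPose keypoint features and Praat
# acoustic features have already been extracted and merged into one matrix.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

n_frames, n_features, n_speakers = 1200, 20, 12
X = rng.normal(size=(n_frames, n_features))            # visual + acoustic features (synthetic)
y = rng.integers(0, 2, size=n_frames)                   # 1 = head movement, 0 = none (synthetic)
speakers = rng.integers(0, n_speakers, size=n_frames)   # speaker id per frame (synthetic)

# Leave one speaker out: train on 11 speakers, test on the held-out one.
logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=speakers):
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print(f"average F1 over held-out speakers: {np.mean(scores):.2f}")
```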

“You move THIS!”: Annotation of Pointing Gestures on Tabletop Interfaces in Low Awareness Situations
Dimitra Anastasiou | Hoorieh Afkari | Valérie Maquil

This paper analyses pointing gestures during low awareness situations occurring in a collaborative problem-solving activity implemented on an interactive tabletop interface. Awareness is considered a crucial requirement for fluid and natural collaboration. We focus on pointing gestures as a strategy to maintain awareness. We describe the results of a user study with five groups, each consisting of three participants, who were asked to solve a task collaboratively on a tabletop interface. The ideal problem-solving approach would have been for the three participants to be fully aware of what their personal areas depict and to communicate this properly to their peers. However, some participants are often hesitant due to a lack of awareness, while others want to take the lead or expedite the process, and therefore pointing gestures towards others' personal areas arise. Our results from analyzing a multimodal corpus of 168.68 minutes showed that in 95% of the cases one user pointed to the personal area of another, while in a few cases (3%) a user not only pointed but also performed a touch gesture on the personal area of another user. In our study, the mean rate of such pointing gestures in low awareness situations across all groups was M=1.96 per minute (SD=0.58).
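As a rough illustration of the reported statistics, the sketch below derives per-minute pointing-gesture rates and their mean and standard deviation from per-group annotation counts; the counts and session durations are invented for illustration and are not the study's data.

```python
# Minimal sketch, assuming one count of low-awareness pointing gestures and
# one session duration per group (hypothetical values, not the study's data).
from statistics import mean, stdev

groups = [(62, 31.0), (71, 35.5), (55, 33.2), (80, 34.0), (69, 35.0)]  # (count, minutes)

rates = [count / minutes for count, minutes in groups]  # gestures per minute per group
print(f"M={mean(rates):.2f} pointing gestures per minute, SD={stdev(rates):.2f}")
```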

Improving Sentiment Analysis with Biofeedback Data
Daniel Schlör | Albin Zehe | Konstantin Kobs | Blerta Veseli | Franziska Westermeier | Larissa Brübach | Daniel Roth | Marc Erich Latoschik | Andreas Hotho

Humans are frequently able to read and interpret the emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account, or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: thoughts and feelings are the roots of many behavioural responses, and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often expressed in a subtle manner, and highly dependent on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture contextual meaning. This paper explores if and how we can further enhance sentiment analysis using biofeedback from humans who are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers who are presented with short texts that have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of sentiment in a text at the sentence level.
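The fusion idea in the abstract can be sketched as follows: a per-sentence lexicon-based sentiment score is combined with per-sentence biosignal features (such as mean heart rate and an EEG band-power value) in a simple classifier. The specific features, synthetic data and logistic-regression classifier are assumptions made for illustration; the paper's actual lexicon, signals and combination method are described only at the level of the abstract.

```python
# Minimal sketch, assuming one lexicon score and aggregated biosignal
# features per sentence; all values here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

n_sentences = 500
lexicon_score = rng.normal(size=(n_sentences, 1))       # lexicon-based sentiment score
heart_rate = rng.normal(70, 5, size=(n_sentences, 1))   # mean heart rate while reading
eeg_alpha = rng.normal(size=(n_sentences, 1))           # e.g. EEG alpha band power
y = rng.integers(0, 2, size=n_sentences)                 # 1 = sentence carries sentiment

# Combine text-based and physiological features into one input matrix.
X = np.hstack([lexicon_score, heart_rate, eeg_alpha])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"per-sentence F1: {f1_score(y_test, clf.predict(X_test)):.2f}")
```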