Torsten Wörtwein
2022
Beyond Additive Fusion: Learning Non-Additive Multimodal Interactions
Torsten Wörtwein
|
Lisa Sheeber
|
Nicholas Allen
|
Jeffrey Cohn
|
Louis-Philippe Morency
Findings of the Association for Computational Linguistics: EMNLP 2022
Multimodal fusion addresses the problem of analyzing spoken words in the multimodal context, including visual expressions and prosodic cues. Even when multimodal models lead to performance improvements, it is often unclear whether bimodal and trimodal interactions are learned or whether modalities are processed independently of each other. We propose Multimodal Residual Optimization (MRO) to separate unimodal, bimodal, and trimodal interactions in a multimodal model. This improves interpretability as the multimodal interaction can be quantified. Inspired by Occam’s razor, the main intuition of MRO is that (simpler) unimodal contributions should be learned before learning (more complex) bimodal and trimodal interactions. For example, bimodal predictions should learn to correct the mistakes (residuals) of unimodal predictions, thereby letting the bimodal predictions focus on the remaining bimodal interactions. Empirically, we observe that MRO successfully separates unimodal, bimodal, and trimodal interactions while not degrading predictive performance. We complement our empirical results with a human perception study and observe that MRO learns multimodal interactions that align with human judgments.
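The residual intuition described in this abstract can be illustrated with a small sketch. The following is not the authors' MRO implementation; the module names, dimensions, linear heads, and the detached lower-order predictions are all assumptions made only to show how higher-order heads can be trained on the residuals of lower-order predictions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch (not the authors' MRO code): unimodal heads predict first,
# bimodal heads are trained on the residual the unimodal prediction leaves behind,
# and a trimodal head on the residual left after that. Architecture and sizes are
# assumptions for illustration only.
class ResidualFusionSketch(nn.Module):
    def __init__(self, d_t=32, d_a=16, d_v=16):
        super().__init__()
        self.uni = nn.ModuleDict({
            "t": nn.Linear(d_t, 1), "a": nn.Linear(d_a, 1), "v": nn.Linear(d_v, 1)})
        self.bi = nn.ModuleDict({
            "ta": nn.Linear(d_t + d_a, 1),
            "tv": nn.Linear(d_t + d_v, 1),
            "av": nn.Linear(d_a + d_v, 1)})
        self.tri = nn.Linear(d_t + d_a + d_v, 1)

    def forward(self, t, a, v):
        # Additive (unimodal) prediction.
        p_uni = self.uni["t"](t) + self.uni["a"](a) + self.uni["v"](v)
        # Bimodal heads only correct what the (detached) unimodal level missed.
        p_bi = (p_uni.detach()
                + self.bi["ta"](torch.cat([t, a], -1))
                + self.bi["tv"](torch.cat([t, v], -1))
                + self.bi["av"](torch.cat([a, v], -1)))
        # The trimodal head corrects what remains after the bimodal level.
        p_tri = p_bi.detach() + self.tri(torch.cat([t, a, v], -1))
        return p_uni, p_bi, p_tri

# Toy usage: supervising every level with the same target makes each
# higher-order head effectively fit the residual of the level below it.
model = ResidualFusionSketch()
t, a, v = torch.randn(8, 32), torch.randn(8, 16), torch.randn(8, 16)
y = torch.randn(8, 1)
p_uni, p_bi, p_tri = model(t, a, v)
loss = F.mse_loss(p_uni, y) + F.mse_loss(p_bi, y) + F.mse_loss(p_tri, y)
loss.backward()
```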
2016
A Multimodal Corpus for the Assessment of Public Speaking Ability and Anxiety
Mathieu Chollet
|
Torsten Wörtwein
|
Louis-Philippe Morency
|
Stefan Scherer
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
The ability to speak efficiently in public is an essential asset for many professions and is useful in everyday life. As such, tools enabling the improvement of public speaking performance and the assessment and mitigation of anxiety related to public speaking would be very useful. Multimodal interaction technologies, such as computer vision and embodied conversational agents, have recently been investigated for the training and assessment of interpersonal skills. One central requirement for these technologies is multimodal corpora for training machine learning models. This paper addresses the need of these technologies by presenting and sharing a multimodal corpus of public speaking presentations. These presentations were collected in an experimental study investigating the potential of interactive virtual audiences for public speaking training. This corpus includes audio-visual data and automatically extracted features, measures of public speaking anxiety and personality, annotations of participants’ behaviors, and expert ratings of behavioral aspects and overall performance of the presenters. We hope this corpus will help other research teams in developing tools for supporting public speaking training.
Co-authors
- Louis-Philippe Morency 2
- Lisa Sheeber 1
- Nicholas Allen 1
- Jeffrey Cohn 1
- Mathieu Chollet 1