A Multi-modal Personality Prediction System

Chanchal Suman, Aditya Gupta, Sriparna Saha, Pushpak Bhattacharyya


Abstract
Automatic prediction of personality traits has many real-life applications, e.g., in forensics, recommender systems, and personalized services. In this work, we propose a solution framework for predicting the personality traits of a user from videos. Ambient, facial, and audio features are extracted from the user's video and used for the final prediction. The visual and audio modalities are combined in two different ways: by averaging the predictions obtained from the individual modalities, and by concatenating the features in a multi-modal setting. The dataset released in ChaLearn-16 is used to evaluate the performance of the system. Experimental results illustrate that better performance can be obtained with a handful of images rather than by using all the images present in the video.
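The abstract describes two standard fusion strategies: late fusion (averaging the per-modality predictions) and early fusion (concatenating the features before a single prediction). The sketch below illustrates the difference with toy linear regressors; all names, dimensions, and weights are illustrative stand-ins and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality features for one video clip (sizes are assumptions,
# not from the paper).
visual_feats = rng.random(128)   # ambient + facial features
audio_feats = rng.random(64)     # audio features

# Stand-in linear regressors mapping features to the five trait scores.
W_visual = rng.random((5, 128))
W_audio = rng.random((5, 64))
W_fused = rng.random((5, 128 + 64))

# Late fusion: predict separately per modality, then average the outputs.
late_fused = 0.5 * (W_visual @ visual_feats + W_audio @ audio_feats)

# Early fusion: concatenate the features, then predict with a single model.
early_fused = W_fused @ np.concatenate([visual_feats, audio_feats])

print("late fusion:", late_fused)
print("early fusion:", early_fused)

In practice the regressors would be trained models (e.g., neural networks), but the fusion arithmetic is the same: late fusion combines outputs, early fusion combines inputs.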
Anthology ID:
2020.icon-main.42
Volume:
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2020
Address:
Indian Institute of Technology Patna, Patna, India
Editors:
Pushpak Bhattacharyya, Dipti Misra Sharma, Rajeev Sangal
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
317–322
URL:
https://aclanthology.org/2020.icon-main.42
Cite (ACL):
Chanchal Suman, Aditya Gupta, Sriparna Saha, and Pushpak Bhattacharyya. 2020. A Multi-modal Personality Prediction System. In Proceedings of the 17th International Conference on Natural Language Processing (ICON), pages 317–322, Indian Institute of Technology Patna, Patna, India. NLP Association of India (NLPAI).
Cite (Informal):
A Multi-modal Personality Prediction System (Suman et al., ICON 2020)
PDF:
https://aclanthology.org/2020.icon-main.42.pdf