Learning Multimodal Gender Profile using Neural Networks

Carlos Pérez Estruch, Roberto Paredes Palacios, Paolo Rosso


Abstract
Gender identification in social networks is one of the most popular aspects of user profile learning. Traditionally it has been linked to author profiling, a difficult problem to solve because of the small differences in language use between genders. This situation has led to the need to take into account information beyond textual data, favoring the emergence of multimodal approaches. The aim of this paper is to apply neural networks to perform data fusion on an existing multimodal corpus, the NUS-MSS data set, which contains not only text data but also image and location information. We improved on previous results in terms of macro accuracy (87.8%), obtaining a state-of-the-art performance of 91.3%.
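The abstract describes fusing text, image, and location features with a neural network for binary gender classification. As a rough illustration only (the paper's actual architecture, feature extractors, and dimensions are not specified here), a minimal early-fusion sketch in NumPy might look like this; all feature sizes and layer widths below are assumptions, not the authors' choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-modality feature vectors for one user
# (dimensions are illustrative, not from the paper).
text_feat = rng.normal(size=300)    # e.g. averaged word embeddings
image_feat = rng.normal(size=128)   # e.g. CNN image descriptor
loc_feat = rng.normal(size=10)      # e.g. location/venue histogram

# Early fusion: concatenate all modalities into a single input vector.
x = np.concatenate([text_feat, image_feat, loc_feat])

# One hidden layer with ReLU, then softmax over the two gender classes.
W1 = rng.normal(scale=0.01, size=(x.size, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.01, size=(64, 2));      b2 = np.zeros(2)

h = np.maximum(0.0, x @ W1 + b1)
probs = softmax(h @ W2 + b2)        # probability for each of the two classes
```

In practice the weights would be trained with backpropagation on labeled profiles; this forward pass only shows how concatenation lets one network consume heterogeneous modalities at once.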
Anthology ID:
R17-1075
Volume:
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
Month:
September
Year:
2017
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
577–582
URL:
https://doi.org/10.26615/978-954-452-049-6_075
DOI:
10.26615/978-954-452-049-6_075
Bibkey:
Cite (ACL):
Carlos Pérez Estruch, Roberto Paredes Palacios, and Paolo Rosso. 2017. Learning Multimodal Gender Profile using Neural Networks. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 577–582, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Learning Multimodal Gender Profile using Neural Networks (Pérez Estruch et al., RANLP 2017)
PDF:
https://doi.org/10.26615/978-954-452-049-6_075