EmojiGAN: learning emojis distributions with a generative model

Bogdan Mazoure, Thang Doan, Saibal Ray


Abstract
Generative models have recently experienced a surge in popularity due to the development of more efficient training algorithms and increasing computational power. Models such as generative adversarial networks (GANs) have been successfully used in various areas such as computer vision, medical imaging, style transfer and natural language generation. Adversarial nets were recently shown to yield results on the image-to-text task, where, given a set of images, one has to provide their corresponding text descriptions. In this paper, we take a similar approach and propose an image-to-emoji architecture, which is trained on data from social networks and can be used to score a given picture using ideograms. We show empirical results of our algorithm on data obtained from the most influential Instagram accounts.
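
For readers unfamiliar with the setup, the sketch below illustrates what a conditional image-to-emoji GAN of this general kind could look like: a generator maps an image embedding plus noise to a distribution over an emoji vocabulary, and a discriminator scores (image embedding, emoji distribution) pairs as real or generated. This is not the paper's actual architecture; the module names, embedding sizes, vocabulary size and training loop are assumptions made purely for illustration.

# Hypothetical sketch of a conditional image-to-emoji GAN (not the authors' exact model).
import torch
import torch.nn as nn

IMG_DIM = 512      # assumed size of a pre-extracted image embedding
NOISE_DIM = 100    # assumed latent noise dimension
EMOJI_VOCAB = 64   # assumed number of emojis in the vocabulary

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NOISE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, EMOJI_VOCAB),
            nn.Softmax(dim=-1),  # output a distribution over the emoji vocabulary
        )

    def forward(self, img_emb, z):
        return self.net(torch.cat([img_emb, z], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + EMOJI_VOCAB, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability that the (image, emoji distribution) pair is real
        )

    def forward(self, img_emb, emoji_dist):
        return self.net(torch.cat([img_emb, emoji_dist], dim=-1))

# One illustrative adversarial step on a random stand-in batch.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

img = torch.randn(8, IMG_DIM)                          # stand-in image embeddings
real = torch.softmax(torch.randn(8, EMOJI_VOCAB), -1)  # stand-in observed emoji usage

# Discriminator update: real pairs vs. generated pairs.
fake = G(img, torch.randn(8, NOISE_DIM)).detach()
loss_d = bce(D(img, real), torch.ones(8, 1)) + bce(D(img, fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: try to fool the discriminator.
fake = G(img, torch.randn(8, NOISE_DIM))
loss_g = bce(D(img, fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In practice the image embeddings would come from a pretrained vision network and the target emoji distributions from the scraped Instagram captions, but those details are not specified here.
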
Anthology ID:
W18-6240
Volume:
Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Alexandra Balahur, Saif M. Mohammad, Veronique Hoste, Roman Klinger
Venue:
WASSA
Publisher:
Association for Computational Linguistics
Pages:
273–279
URL:
https://aclanthology.org/W18-6240
DOI:
10.18653/v1/W18-6240
Cite (ACL):
Bogdan Mazoure, Thang Doan, and Saibal Ray. 2018. EmojiGAN: learning emojis distributions with a generative model. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 273–279, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
EmojiGAN: learning emojis distributions with a generative model (Mazoure et al., WASSA 2018)
PDF:
https://aclanthology.org/W18-6240.pdf