UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models

Hassan Alhuzali, Mohamed Elaraby, Muhammad Abdul-Mageed


Abstract
We describe the UBC-NLP contribution to IEST-2018, which focuses on learning implicit emotion in Twitter data. Among the 30 participating teams, our system ranked 4th (with a 69.3% F-score). Post-competition, we scored slightly higher than the 3rd-ranking system (reaching 70.7%). Our system builds on a pre-trained language model (LM) fine-tuned on the data provided by the task organizers, and our best results are obtained by averaging an ensemble of such language models. We also offer an analysis of system performance and of the impact of training data size on the task. For example, we show that training our best model for only one epoch on less than 40% of the data already outperforms the baseline reported by Klinger et al. (2018) for the task.
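The paper itself does not include code; the following is a minimal sketch of the ensemble-averaging step described in the abstract, assuming each fine-tuned LM classifier emits a class-probability distribution per tweet. Function names, array shapes, and the toy inputs are illustrative, not the authors' implementation.

```python
import numpy as np

# Hypothetical per-model outputs: each fine-tuned LM classifier yields a
# (num_tweets, num_classes) matrix of class probabilities. IEST-2018 has
# six implicit-emotion classes (assumed label set shown below).
EMOTIONS = ["anger", "disgust", "fear", "joy", "sad", "surprise"]

def ensemble_average(model_probs):
    """Average class probabilities across models, then take the argmax label."""
    stacked = np.stack(model_probs)    # (num_models, num_tweets, num_classes)
    mean_probs = stacked.mean(axis=0)  # (num_tweets, num_classes)
    return mean_probs.argmax(axis=1)   # predicted class index per tweet

# Toy example: three "models" scoring two tweets with random distributions.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(len(EMOTIONS)), size=2) for _ in range(3)]
labels = ensemble_average(probs)
print([EMOTIONS[i] for i in labels])
```

Averaging probabilities (rather than hard voting) lets models that are uncertain contribute proportionally less to the final decision, which is a common reason this form of ensembling is chosen.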
Anthology ID:
W18-6250
Volume:
Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Alexandra Balahur, Saif M. Mohammad, Véronique Hoste, Roman Klinger
Venue:
WASSA
Publisher:
Association for Computational Linguistics
Pages:
342–347
URL:
https://aclanthology.org/W18-6250
DOI:
10.18653/v1/W18-6250
Cite (ACL):
Hassan Alhuzali, Mohamed Elaraby, and Muhammad Abdul-Mageed. 2018. UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 342–347, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models (Alhuzali et al., WASSA 2018)
PDF:
https://aclanthology.org/W18-6250.pdf