FiSSA at SemEval-2020 Task 9: Fine-tuned for Feelings

Bertelt Braaksma, Richard Scholtens, Stan van Suijlekom, Remy Wang, Ahmet Üstün


Abstract
In this paper, we present our approach to sentiment classification on Spanish-English code-mixed social media data for SemEval-2020 Task 9. We investigate the performance of various pre-trained Transformer models using different fine-tuning strategies. We explore both monolingual and multilingual models with the standard fine-tuning method. Additionally, we propose a custom model that we fine-tune in two steps: first with a language modeling objective, then with a task-specific objective. Although two-step fine-tuning improves sentiment classification performance over the base model, the large multilingual XLM-RoBERTa model achieves the best weighted F1-score, with 0.537 on the development data and 0.739 on the test data. With this score, our team jupitter placed tenth overall in the competition.
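The two-step strategy described above (continued language-model fine-tuning followed by task-specific fine-tuning) can be sketched with the Hugging Face Transformers library roughly as follows. This is a minimal illustration, not the authors' released code: the file names, column names, and hyperparameters are placeholder assumptions; the actual implementation is in the barfsma/FiSSA repository linked below.

# Sketch of two-step fine-tuning: (1) masked language modeling on
# unlabeled code-mixed text, (2) sentiment classification on labeled data.
# Paths, columns, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Step 1: adapt the pre-trained encoder to code-mixed tweets with an MLM
# objective. "cm_tweets.txt" (one tweet per line) is a placeholder file.
lm_data = load_dataset("text", data_files={"train": "cm_tweets.txt"})
lm_data = lm_data.map(tokenize, batched=True, remove_columns=["text"])

lm_model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
lm_trainer = Trainer(
    model=lm_model,
    args=TrainingArguments(output_dir="lm_ft", num_train_epochs=1),
    train_dataset=lm_data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer),  # masks 15% of tokens by default
)
lm_trainer.train()
lm_trainer.save_model("lm_ft")

# Step 2: fine-tune the adapted encoder on labeled sentiment data with
# three classes (negative / neutral / positive). "sentimix.csv" is a
# placeholder with "text" and "label" columns; the new classification
# head is randomly initialized and learned during this step.
cls_data = load_dataset("csv", data_files={"train": "sentimix.csv"})
cls_data = cls_data.map(tokenize, batched=True)

cls_model = AutoModelForSequenceClassification.from_pretrained("lm_ft", num_labels=3)
cls_trainer = Trainer(
    model=cls_model,
    args=TrainingArguments(output_dir="cls_ft", num_train_epochs=3),
    train_dataset=cls_data["train"],
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
cls_trainer.train()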
Anthology ID: 2020.semeval-1.165
Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month: December
Year: 2020
Address: Barcelona (online)
Editors: Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue: SemEval
SIG: SIGLEX
Publisher: International Committee for Computational Linguistics
Pages: 1239–1246
URL: https://aclanthology.org/2020.semeval-1.165
DOI: 10.18653/v1/2020.semeval-1.165
Cite (ACL): Bertelt Braaksma, Richard Scholtens, Stan van Suijlekom, Remy Wang, and Ahmet Üstün. 2020. FiSSA at SemEval-2020 Task 9: Fine-tuned for Feelings. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1239–1246, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal): FiSSA at SemEval-2020 Task 9: Fine-tuned for Feelings (Braaksma et al., SemEval 2020)
PDF: https://aclanthology.org/2020.semeval-1.165.pdf
Code: barfsma/FiSSA