XLP at SemEval-2020 Task 9: Cross-lingual Models with Focal Loss for Sentiment Analysis of Code-Mixing Language

Yili Ma, Liang Zhao, Jie Hao


Abstract
In this paper, we present an approach for sentiment analysis of code-mixed language on Twitter as defined in SemEval-2020 Task 9. Our team (referred to as LiangZhao) employs different multilingual models with a weighted loss focused on the complexity of code-mixing in a sentence; the best model achieved an F1-score of 0.806 and ranked 1st in the Sentimix Spanglish subtask. The performance of the method is analyzed and each component of our architecture is demonstrated.
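The title refers to focal loss, which down-weights easy, well-classified examples so training concentrates on harder ones. Below is a minimal PyTorch sketch of the standard focal loss formulation (Lin et al., 2017); the specific weighting tied to code-mixing complexity is the authors' own, and the function name, hyperparameters, and usage here are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F


def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss sketch.

    logits:  (batch, num_classes) raw model outputs
    targets: (batch,) integer class labels
    gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy
    alpha:   optional (num_classes,) per-class weights
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # -log p_t, optionally weighted per class by alpha
    ce = F.nll_loss(log_probs, targets, weight=alpha, reduction="none")
    # probability assigned to the true class
    p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    # focal modulation: (1 - p_t)^gamma scales down confident predictions
    loss = (1.0 - p_t) ** gamma * ce
    return loss.mean()


# Example usage with random data and 3 sentiment classes
# (negative / neutral / positive, as in the Sentimix task)
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(focal_loss(logits, labels, gamma=2.0))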
Anthology ID:
2020.semeval-1.126
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
975–980
URL:
https://aclanthology.org/2020.semeval-1.126
DOI:
10.18653/v1/2020.semeval-1.126
Cite (ACL):
Yili Ma, Liang Zhao, and Jie Hao. 2020. XLP at SemEval-2020 Task 9: Cross-lingual Models with Focal Loss for Sentiment Analysis of Code-Mixing Language. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 975–980, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
XLP at SemEval-2020 Task 9: Cross-lingual Models with Focal Loss for Sentiment Analysis of Code-Mixing Language (Ma et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.126.pdf