PGSG at SemEval-2020 Task 12: BERT-LSTM with Tweets’ Pretrained Model and Noisy Student Training Method

Bao-Tran Pham-Hong, Setu Chokshi

Abstract
The paper presents a system developed for the SemEval-2020 competition Task 12 (OffensEval-2): Multilingual Offensive Language Identification in Social Media. We achieve second place in sub-task B (automatic categorization of offense types) and rank 55th, with a macro F1-score of 90.59, in sub-task A (offensive language identification). Our solution uses a stack of BERT and LSTM layers, trained with the Noisy Student method. Since the tweet data contains a large number of noisy words and slang, we update the vocabulary of the BERT-large model pre-trained by the Google AI Language team and fine-tune the model on the tweet sentences provided in the challenge.
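
The paper does not include source code, but the stacking described in the abstract is straightforward to sketch. Below is a minimal illustration, assuming PyTorch and the HuggingFace transformers library; the checkpoint name, LSTM width, pooling strategy, and label count are illustrative assumptions, not the authors' reported configuration.

    # Minimal sketch of a BERT-LSTM stack (illustrative, not the authors' code).
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class BertLstmClassifier(nn.Module):
        def __init__(self, num_labels=2, lstm_hidden=256):
            super().__init__()
            # Pre-trained BERT-large encoder; the authors additionally update
            # its vocabulary and fine-tune it on tweets (not shown here).
            self.bert = BertModel.from_pretrained("bert-large-uncased")
            self.lstm = nn.LSTM(
                input_size=self.bert.config.hidden_size,
                hidden_size=lstm_hidden,
                batch_first=True,
                bidirectional=True,
            )
            self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

        def forward(self, input_ids, attention_mask):
            # Token-level hidden states from BERT: (batch, seq_len, hidden_size)
            hidden = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
            # Run the token states through a bidirectional LSTM and pool the
            # final forward and backward hidden states.
            _, (h_n, _) = self.lstm(hidden)
            pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)
            return self.classifier(pooled)

Noisy Student training (Xie et al., 2020), which the abstract names, alternates between pseudo-labeling unlabeled data with a teacher model and retraining a noised student on the combined set. A schematic of that loop, shown with scikit-learn estimators purely for brevity (the paper applies the idea to the BERT-LSTM model above):

    # Generic Noisy Student loop (schematic; all names are illustrative).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def noisy_student(X_labeled, y_labeled, X_unlabeled, rounds=3, seed=0):
        rng = np.random.default_rng(seed)
        # Teacher trained on the labeled data only.
        teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
        for _ in range(rounds):
            pseudo = teacher.predict(X_unlabeled)   # pseudo-label unlabeled data
            X = np.vstack([X_labeled, X_unlabeled])
            y = np.concatenate([y_labeled, pseudo])
            # Input perturbation stands in for the dropout/augmentation noise
            # applied to neural students.
            X_noised = X + rng.normal(scale=0.1, size=X.shape)
            # The retrained student becomes the next round's teacher.
            teacher = LogisticRegression(max_iter=1000).fit(X_noised, y)
        return teacher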
Anthology ID:
2020.semeval-1.280
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
2111–2116
URL:
https://aclanthology.org/2020.semeval-1.280
DOI:
10.18653/v1/2020.semeval-1.280
Cite (ACL):
Bao-Tran Pham-Hong and Setu Chokshi. 2020. PGSG at SemEval-2020 Task 12: BERT-LSTM with Tweets’ Pretrained Model and Noisy Student Training Method. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2111–2116, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
PGSG at SemEval-2020 Task 12: BERT-LSTM with Tweets’ Pretrained Model and Noisy Student Training Method (Pham-Hong & Chokshi, SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.280.pdf