JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques Using BERT Pre-trained Model

Ola Altiti, Malak Abdullah, Rasha Obiedat


Abstract
This paper presents our submission to SemEval-2020 Task 11, Detection of Propaganda Techniques in News Articles. The competition comprises two subtasks; we participated in the Technique Classification (TC) subtask, which aims to identify the propaganda technique used in a given propaganda span. We implemented and evaluated various models for detecting propaganda. Our proposed model is based on the uncased BERT pre-trained language model, which has achieved state-of-the-art performance on multiple NLP benchmarks. Our model achieved an F1-score of 0.55307, outperforming the organizers' baseline model (0.2519 F1-score) and falling 0.07 short of the best-performing team. Compared to the other participating systems, our submission is ranked 15th out of 31 participants.
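As a concrete illustration of the approach the abstract describes, the sketch below shows one plausible way to set up an uncased-BERT span classifier with the Hugging Face Transformers library. This is not the authors' released code: the hyperparameters (e.g., a maximum length of 128 tokens) and the label count of 14 TC techniques are assumptions based on the task description, and the helper function name is hypothetical.

# A minimal sketch (not the authors' exact pipeline) of classifying a
# propaganda span with uncased BERT for the Technique Classification
# (TC) subtask. Assumes `pip install torch transformers`.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumption: the TC subtask defines 14 propaganda technique labels.
NUM_TECHNIQUES = 14

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_TECHNIQUES
)
model.eval()

def classify_span(span_text: str) -> int:
    """Predict a technique label index for a single propaganda span."""
    inputs = tokenizer(
        span_text, truncation=True, max_length=128, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, NUM_TECHNIQUES)
    return int(logits.argmax(dim=-1))

In a full pipeline, the classification head would first be fine-tuned on the labeled spans from the task's training articles; the snippet above only covers inference over a single span.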
Anthology ID:
2020.semeval-1.229
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1749–1755
URL:
https://aclanthology.org/2020.semeval-1.229
DOI:
10.18653/v1/2020.semeval-1.229
Cite (ACL):
Ola Altiti, Malak Abdullah, and Rasha Obiedat. 2020. JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques Using BERT Pre-trained Model. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1749–1755, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques Using BERT Pre-trained Model (Altiti et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.229.pdf