Pretrained Ensemble Learning for Fine-Grained Propaganda Detection

Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub


Abstract
In this paper, we describe our team's effort on the sentence-level classification (SLC) task of the fine-grained propaganda detection shared task at the NLP4IF 2019 workshop, co-located with the EMNLP-IJCNLP 2019 conference. Our top-performing system averages the predictions of an ensemble of three pretrained models. The first two models use the uncased and cased versions of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), while the third uses the Universal Sentence Encoder (USE) (Cer et al., 2018). Out of 26 participating teams, our system ranked first with an F1-score of 68.8312 on the development dataset and sixth with an F1-score of 61.3870 on the testing dataset.
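To illustrate the ensemble-averaging step described in the abstract, here is a minimal sketch in Python. The per-sentence probabilities, the 0.5 decision threshold, and the variable names are all hypothetical placeholders for illustration, not the authors' actual code or pipeline.

import numpy as np

# Hypothetical per-sentence propaganda probabilities from the three
# pretrained models (values are made up for illustration).
p_bert_uncased = np.array([0.91, 0.12, 0.48])  # BERT (uncased) classifier
p_bert_cased   = np.array([0.85, 0.20, 0.55])  # BERT (cased) classifier
p_use          = np.array([0.78, 0.08, 0.61])  # USE-based classifier

# Ensemble average: take the mean of the three models' probabilities
# for each sentence.
p_ensemble = np.mean([p_bert_uncased, p_bert_cased, p_use], axis=0)

# Final SLC decision: flag a sentence as propaganda if the averaged
# probability crosses 0.5 (the threshold here is an assumption).
labels = (p_ensemble >= 0.5).astype(int)
print(p_ensemble, labels)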
Anthology ID:
D19-5020
Volume:
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Anna Feldman, Giovanni Da San Martino, Alberto Barrón-Cedeño, Chris Brew, Chris Leberknight, Preslav Nakov
Venue:
NLP4IF
Publisher:
Association for Computational Linguistics
Pages:
139–142
URL:
https://aclanthology.org/D19-5020
DOI:
10.18653/v1/D19-5020
Cite (ACL):
Ali Fadel, Ibraheem Tuffaha, and Mahmoud Al-Ayyoub. 2019. Pretrained Ensemble Learning for Fine-Grained Propaganda Detection. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 139–142, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Pretrained Ensemble Learning for Fine-Grained Propaganda Detection (Fadel et al., NLP4IF 2019)
PDF:
https://aclanthology.org/D19-5020.pdf