Fine-Grained Propaganda Detection with Fine-Tuned BERT

Shehel Yoosuf, Yin Yang


Abstract
This paper presents the winning solution to the Fragment Level Classification (FLC) task of the Fine-Grained Propaganda Detection shared task at the NLP4IF'19 workshop. The goal of the FLC task is to detect and classify textual segments that correspond to one of 18 given propaganda techniques in a dataset of news articles. The main idea of our solution is to perform word-level classification using fine-tuned BERT, a popular pre-trained language model. Besides presenting the model and its evaluation results, we also investigate the attention heads in the model, which provide insights into what the model learns, as well as directions for potential improvement.
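To make the word-level classification setup concrete, the sketch below tags each wordpiece of an input sentence with a technique label and also exposes the attention heads the paper inspects. It is a minimal sketch, assuming the HuggingFace `transformers` library; the 19-label scheme (18 techniques plus a "none" label), the checkpoint name, and the span-recovery logic are illustrative assumptions, not the paper's exact implementation.

import torch
from transformers import BertForTokenClassification, BertTokenizerFast

NUM_LABELS = 19  # assumption: 18 propaganda techniques + a "none" label

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=NUM_LABELS
)  # the classification head is untrained here; in practice it would be
model.eval()  # fine-tuned on the FLC training data first

text = "They are destroying our great country!"
enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
offsets = enc.pop("offset_mapping")  # forward() does not accept this key

with torch.no_grad():
    out = model(**enc, output_attentions=True)

# One predicted label id per wordpiece; contiguous non-"none" wordpieces
# would then be merged into character-level fragments for the FLC output.
pred = out.logits.argmax(dim=-1)[0]
for (start, end), label_id in zip(offsets[0].tolist(), pred.tolist()):
    if start == end:  # skip special tokens such as [CLS] and [SEP]
        continue
    print(f"{text[start:end]!r} -> label {label_id}")

# The attention heads the paper analyzes are returned as a tuple of
# per-layer tensors of shape (batch, num_heads, seq_len, seq_len).
print(len(out.attentions), out.attentions[0].shape)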
Anthology ID: D19-5011
Volume: Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Anna Feldman, Giovanni Da San Martino, Alberto Barrón-Cedeño, Chris Brew, Chris Leberknight, Preslav Nakov
Venue: NLP4IF
Publisher: Association for Computational Linguistics
Pages: 87–91
URL: https://aclanthology.org/D19-5011
DOI: 10.18653/v1/D19-5011
Cite (ACL): Shehel Yoosuf and Yin Yang. 2019. Fine-Grained Propaganda Detection with Fine-Tuned BERT. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 87–91, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Fine-Grained Propaganda Detection with Fine-Tuned BERT (Yoosuf & Yang, NLP4IF 2019)
PDF: https://aclanthology.org/D19-5011.pdf