HateBERT: Retraining BERT for Abusive Language Detection in English

Tommaso Caselli, Valerio Basile, Jelena Mitrović, Michael Granitzer


Abstract
We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful, which we have curated and made publicly available. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive language, abusive language, and hate speech detection tasks. On all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by the compatibility of the annotated phenomena.
Anthology ID:
2021.woah-1.3
Volume:
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Aida Mostafazadeh Davani, Douwe Kiela, Mathias Lambert, Bertie Vidgen, Vinodkumar Prabhakaran, Zeerak Waseem
Venue:
WOAH
Publisher:
Association for Computational Linguistics
Pages:
17–25
URL:
https://aclanthology.org/2021.woah-1.3
DOI:
10.18653/v1/2021.woah-1.3
Bibkey:
Cite (ACL):
Tommaso Caselli, Valerio Basile, Jelena Mitrović, and Michael Granitzer. 2021. HateBERT: Retraining BERT for Abusive Language Detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17–25, Online. Association for Computational Linguistics.
Cite (Informal):
HateBERT: Retraining BERT for Abusive Language Detection in English (Caselli et al., WOAH 2021)
PDF:
https://aclanthology.org/2021.woah-1.3.pdf
Video:
https://aclanthology.org/2021.woah-1.3.mp4
Code:
tommasoc80/HateBERT
Data:
HatEval
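
Usage note: the released checkpoint can be loaded with the Hugging Face transformers library and fine-tuned for the detection tasks described in the abstract. Below is a minimal sketch, assuming the model is available on the Hugging Face Hub under the identifier GroNLP/hateBERT (an assumption not stated on this page; consult the linked tommasoc80/HateBERT repository) and that a binary offensive/not-offensive setup is desired.

# Minimal sketch: loading HateBERT for fine-tuning on a binary
# classification task with Hugging Face transformers.
# "GroNLP/hateBERT" is an assumed Hub identifier, not confirmed by this page.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "GroNLP/hateBERT"  # assumed; see the linked repository

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# num_labels=2 reflects a binary offensive language detection setup;
# the classification head is randomly initialized and must be
# fine-tuned on a labeled dataset before its predictions are meaningful.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

inputs = tokenizer("an example comment", return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # shape (1, 2); untrained until fine-tuned
print(logits)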