Fight Fire with Fire: Fine-tuning Hate Detectors using Large Samples of Generated Hate Speech

Tomer Wullach, Amir Adler, Einat Minkov


Abstract
Automatic hate speech detection is hampered by the scarcity of labeled datasets, leading to poor generalization. We employ pretrained language models (LMs) to alleviate this data bottleneck. We utilize the GPT LM for generating large amounts of synthetic hate speech sequences from available labeled examples, and leverage the generated data in fine-tuning large pretrained LMs on hate detection. An empirical study using the models of BERT, RoBERTa and ALBERT, shows that this approach improves generalization significantly and consistently within and across data distributions. In fact, we find that generating relevant labeled hate speech sequences is preferable to using out-of-domain, and sometimes also within-domain, human-labeled examples.
Anthology ID:
2021.findings-emnlp.402
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4699–4705
URL:
https://aclanthology.org/2021.findings-emnlp.402
DOI:
10.18653/v1/2021.findings-emnlp.402
Cite (ACL):
Tomer Wullach, Amir Adler, and Einat Minkov. 2021. Fight Fire with Fire: Fine-tuning Hate Detectors using Large Samples of Generated Hate Speech. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4699–4705, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Fight Fire with Fire: Fine-tuning Hate Detectors using Large Samples of Generated Hate Speech (Wullach et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.402.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.402.mp4
Data
Hate Speech