Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection

Nicolás Benjamín Ocampo, Elena Cabrio, Serena Villata


Abstract
Research on abusive content detection on social media has primarily focused on explicit forms of hate speech (HS), which are often identifiable by recognizing hateful words and expressions. Messages containing linguistically subtle and implicit forms of hate speech still constitute an open challenge for automatic hate speech detection. In this paper, we propose a new framework for generating adversarial implicit HS short-text messages using Auto-regressive Language Models. Moreover, we propose a strategy to group the generated implicit messages into complexity levels (EASY, MEDIUM, and HARD categories) characterizing how challenging these messages are for supervised classifiers. Finally, relying on (Dinan et al., 2019; Vidgen et al., 2021), we propose a “build it, break it, fix it” training scheme using HARD messages, showing how iteratively retraining on HARD messages substantially improves SOTA models’ performance on implicit HS benchmarks.
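To make the two ideas in the abstract concrete, here is a minimal, hypothetical Python sketch (not the authors’ code) of how generated messages might be bucketed into EASY/MEDIUM/HARD according to a classifier’s confidence, and how HARD examples could feed an iterative “build it, break it, fix it” retraining loop. The thresholds and the `classifier_prob`/`retrain` helpers are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: bucket generated implicit-HS messages by how confidently
# a trained hate-speech classifier detects them, then keep the HARD (missed) ones
# for an iterative retraining round. `classifier_prob` and `retrain` are hypothetical
# stand-ins for a real model's predict/fit calls.

from typing import Callable, Dict, List, Tuple

def bucket_by_difficulty(
    messages: List[str],
    classifier_prob: Callable[[str], float],  # P(hate | message) from the current model
    easy_thr: float = 0.8,                    # assumed thresholds, not from the paper
    hard_thr: float = 0.5,
) -> Dict[str, List[str]]:
    """Group generated messages into EASY / MEDIUM / HARD for the classifier."""
    buckets: Dict[str, List[str]] = {"EASY": [], "MEDIUM": [], "HARD": []}
    for msg in messages:
        p = classifier_prob(msg)
        if p >= easy_thr:          # confidently caught -> easy adversarial example
            buckets["EASY"].append(msg)
        elif p >= hard_thr:        # caught, but with a small margin
            buckets["MEDIUM"].append(msg)
        else:                      # missed by the classifier -> hard example
            buckets["HARD"].append(msg)
    return buckets

def build_break_fix(
    train_set: List[Tuple[str, int]],
    generated: List[str],
    classifier_prob: Callable[[str], float],
    retrain: Callable[[List[Tuple[str, int]]], Callable[[str], float]],
    rounds: int = 3,
) -> Callable[[str], float]:
    """Iteratively add HARD (missed) hateful messages to training and refit ("fix it")."""
    for _ in range(rounds):
        hard = bucket_by_difficulty(generated, classifier_prob)["HARD"]
        if not hard:
            break
        train_set = train_set + [(msg, 1) for msg in hard]  # HARD examples labeled hateful
        classifier_prob = retrain(train_set)                 # refit on the augmented set
    return classifier_prob
```

In this reading, each round the classifier is “broken” by the adversarial messages it misses, those messages are added to the training data, and the model is refit; the paper reports that repeating this with HARD messages substantially improves performance on implicit HS benchmarks.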
Anthology ID:
2023.findings-acl.173
Original:
2023.findings-acl.173v1
Version 2:
2023.findings-acl.173v2
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2758–2772
URL:
https://aclanthology.org/2023.findings-acl.173
DOI:
10.18653/v1/2023.findings-acl.173
Cite (ACL):
Nicolás Benjamín Ocampo, Elena Cabrio, and Serena Villata. 2023. Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2758–2772, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection (Ocampo et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.173.pdf