Separating Hate Speech and Offensive Language Classes via Adversarial Debiasing

Shuzhou Yuan, Antonis Maronikolakis, Hinrich Schütze


Abstract
Research to tackle hate speech plaguing online media has made strides in providing solutions, analyzing bias and curating data. A challenging problem is the ambiguity between hate speech and offensive language, which causes low performance both overall and specifically for the hate speech class. It can be argued that misclassifying actual hate speech content as merely offensive can lead to further harm against targeted groups. In our work, we mitigate this potentially harmful phenomenon by proposing an adversarial debiasing method to separate the two classes. We show that our method works for English, Arabic, German and Hindi, as well as in a multilingual setting, improving performance over baselines.
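The abstract names the technique (adversarial debiasing) but not the architecture. The sketch below is a generic adversarial-debiasing setup with a gradient-reversal layer in PyTorch, not the authors' reported configuration; the encoder output size, the choice of confounding attribute, the head sizes and the lambd scaling factor are illustrative assumptions.

# Minimal sketch of adversarial debiasing via gradient reversal (hypothetical,
# not the authors' exact implementation). A pretrained encoder is assumed to
# produce a fixed-size representation that is fed to both heads.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class AdversarialDebiasingHeads(nn.Module):
    """Task head plus adversarial head over a shared representation (illustrative)."""

    def __init__(self, hidden_dim=768, num_labels=3, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Main classifier over the task labels (e.g. hate / offensive / neither).
        self.task_head = nn.Linear(hidden_dim, num_labels)
        # Adversary tries to predict a confounding attribute (e.g. presence of
        # profane surface tokens; a hypothetical choice, not taken from the paper)
        # from gradient-reversed features, pushing the encoder to drop that
        # signal rather than use it as a shortcut for the hate speech class.
        self.adv_head = nn.Linear(hidden_dim, 2)

    def forward(self, encoded):
        task_logits = self.task_head(encoded)
        adv_logits = self.adv_head(GradReverse.apply(encoded, self.lambd))
        return task_logits, adv_logits

In training, the task loss and the adversarial loss would simply be summed: because of the reversed gradient, the adversary still learns its own prediction task while the encoder is pushed away from the confounding signal, which is the mechanism adversarial debiasing relies on.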
Anthology ID:
2022.woah-1.1
Volume:
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
Month:
July
Year:
2022
Address:
Seattle, Washington (Hybrid)
Editors:
Kanika Narang, Aida Mostafazadeh Davani, Lambert Mathias, Bertie Vidgen, Zeerak Talat
Venue:
WOAH
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
1–10
Language:
URL:
https://aclanthology.org/2022.woah-1.1
DOI:
10.18653/v1/2022.woah-1.1
Bibkey:
Cite (ACL):
Shuzhou Yuan, Antonis Maronikolakis, and Hinrich Schütze. 2022. Separating Hate Speech and Offensive Language Classes via Adversarial Debiasing. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 1–10, Seattle, Washington (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Separating Hate Speech and Offensive Language Classes via Adversarial Debiasing (Yuan et al., WOAH 2022)
PDF:
https://aclanthology.org/2022.woah-1.1.pdf
Video:
https://aclanthology.org/2022.woah-1.1.mp4
Code:
shuzhouyuan/hate_speech_adversarial_debiasing
Data:
Hate Speech and Offensive Language