Donnie Parent


2025

Annotating Hate Speech towards Identity Groups
Donnie Parent | Nina Georgiades | Charvi Mishra | Khaled Mohammed | Sandra Kübler
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Detecting hate speech, especially implicit hate speech, is a difficult task. We focus on annotating implicit hate targeting identity groups. We describe our dataset, which is a subset of AbuseEval (Caselli et al., 2020), and our annotation process for implicit identity hate. We annotate the type of abuse, the type of identity abuse, and the target identity group. We then discuss cases on which annotators disagreed and provide dataset statistics. Finally, we calculate our inter-annotator agreement.

On the Interaction of Identity Hate Classification and Data Bias
Donnie Parent | Nina Georgiades | Charvi Mishra | Khaled Mohammed | Sandra Kübler
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Hate speech detection is a task where machine learning models tend to be limited by biases introduced by the dataset. We use two existing datasets of hate speech towards identity groups: the one by Wiegand et al. (2022) and a reannotated subset of the data in AbuseEval (Caselli et al., 2020). Since the data by Wiegand et al. (2022) were collected using a single syntactic pattern, this dataset may contain a syntactic bias. We test whether such a bias exists by using a more syntactically general dataset for testing. Our findings show that classifiers trained on the dataset with the syntactic bias and tested on the less constrained dataset suffer a loss in performance on the order of 20 points. Further experiments show that this drop can only partly be attributed to a shift in identity groups between the datasets.