Ingmar Weber
2019
Racial Bias in Hate Speech and Abusive Language Detection Datasets
Thomas Davidson | Debasmita Bhattacharya | Ingmar Weber
Proceedings of the Third Workshop on Abusive Language Online
Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field, they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect.
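As a rough illustration of the kind of audit the abstract describes, the sketch below trains a toy classifier on annotated tweets and compares the rate at which it flags held-out tweets from two dialect groups as abusive. The TF-IDF plus logistic-regression pipeline, the toy training data, and all variable names are illustrative assumptions, not the paper's actual models, datasets, or dialect-inference method.

```python
# Hypothetical sketch of a dialect-based bias audit: train a classifier,
# then compare per-group positive ("abusive") prediction rates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: (tweet, label) pairs, label 1 = abusive.
train_texts = ["you are awful", "have a great day", "I hate you", "nice work"]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

def flag_rate(tweets):
    """Fraction of tweets the classifier predicts as abusive."""
    return clf.predict(vectorizer.transform(tweets)).mean()

# Held-out tweets grouped by inferred dialect (placeholders; replace with
# real samples labeled by a dialect-identification step).
aae_tweets = ["..."]  # tweets identified as African-American English
sae_tweets = ["..."]  # tweets identified as Standard American English

# Systematic bias would appear as a substantially higher rate for one group.
print("AAE flag rate:", flag_rate(aae_tweets))
print("SAE flag rate:", flag_rate(sae_tweets))
```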
2017
Understanding Abuse: A Typology of Abusive Language Detection Subtasks
Zeerak Waseem | Thomas Davidson | Dana Warmsley | Ingmar Weber
Proceedings of the First Workshop on Abusive Language Online
As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label. Based on work on hate speech, cyberbullying, and online abuse, we propose a typology that captures central similarities and differences between subtasks and discuss the implications of this for data annotation and feature construction. We emphasize the practical actions that researchers can take to best approach their abusive language detection subtask of interest.