Recalibrating classifiers for interpretable abusive content detection

Bertie Vidgen, Scott Hale, Sam Staton, Tom Melham, Helen Margetts, Ohad Kammar, Marcin Szymczak


Abstract
We investigate the use of machine learning classifiers for detecting online abuse in empirical research. We show that uncalibrated classifiers (i.e. where the ‘raw’ scores are used) align poorly with human evaluations. This limits their use for understanding the dynamics, patterns and prevalence of online abuse. We examine two widely used classifiers (created by Perspective and Davidson et al.) on a dataset of tweets directed against candidates in the UK’s 2017 general election. A Bayesian approach is presented to recalibrate the raw scores from the classifiers, using probabilistic programming and newly annotated data. We argue that interpretability evaluation and recalibration are integral to the application of abusive content classifiers.
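The abstract describes recalibrating raw classifier scores against human annotations. As a rough illustration of what score recalibration means, here is a Platt-style logistic calibration fitted by gradient descent; this is a deliberate simplification, not the authors' Bayesian probabilistic-programming model, and all function names are illustrative.

```python
import math

def logit(p, eps=1e-6):
    """Log-odds of a probability, clipped away from 0 and 1."""
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_calibration(raw_scores, labels, lr=0.1, steps=2000):
    """Fit a, b in p = sigmoid(a * logit(s) + b) by gradient descent
    on the log loss against binary human labels (Platt-style scaling)."""
    a, b = 1.0, 0.0
    n = len(raw_scores)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for s, y in zip(raw_scores, labels):
            z = logit(s)
            p = sigmoid(a * z + b)
            # d(log loss)/da = (p - y) * z ; d(log loss)/db = (p - y)
            grad_a += (p - y) * z / n
            grad_b += (p - y) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Illustrative use: remap an overconfident classifier's scores so they
# better match the rate at which annotators actually labelled abuse.
raw = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
human = [1, 1, 0, 0, 0, 0]
a, b = fit_calibration(raw, human)
calibrated = [sigmoid(a * logit(s) + b) for s in raw]
```

The fitted mapping preserves the ranking of scores (since `a` stays positive here) but shifts their magnitudes toward empirical label frequencies, which is the interpretability problem the abstract raises: a raw score of 0.8 need not mean an 80% chance of abuse.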
Anthology ID: 2020.nlpcss-1.14
Volume: Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science
Month: November
Year: 2020
Address: Online
Venues: EMNLP | NLP+CSS
Publisher: Association for Computational Linguistics
Pages: 132–138
URL: https://aclanthology.org/2020.nlpcss-1.14
DOI: 10.18653/v1/2020.nlpcss-1.14
Cite (ACL): Bertie Vidgen, Scott Hale, Sam Staton, Tom Melham, Helen Margetts, Ohad Kammar, and Marcin Szymczak. 2020. Recalibrating classifiers for interpretable abusive content detection. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science, pages 132–138, Online. Association for Computational Linguistics.
Cite (Informal): Recalibrating classifiers for interpretable abusive content detection (Vidgen et al., NLP+CSS 2020)
PDF: https://aclanthology.org/2020.nlpcss-1.14.pdf
Optional supplementary material: 2020.nlpcss-1.14.OptionalSupplementaryMaterial.zip