2021
Capturing Covertly Toxic Speech via Crowdsourcing
Alyssa Lees | Daniel Borkan | Ian Kivlichan | Jorge Nario | Tesh Goyal
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
We study the task of labeling covert or veiled toxicity in online conversations. Prior research has highlighted the difficulty of creating language models that recognize nuanced toxicity such as microaggressions. Our investigations further underscore the difficulty of eliciting such labels reliably from raters via crowdsourcing. We introduce an initial dataset, COVERTTOXICITY, which aims to identify and categorize such comments using a refined rater template. Finally, we fine-tune a comment-domain BERT model to classify covertly offensive comments and compare it against existing baselines.
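A minimal sketch of the kind of fine-tuning step the abstract describes, using the Hugging Face Transformers library. The checkpoint name, dataset contents, and hyperparameters below are illustrative assumptions; the paper's comment-domain BERT checkpoint and the COVERTTOXICITY data are not reproduced here.

```python
# Hypothetical sketch: fine-tune a BERT classifier for covert-toxicity detection.
# The base checkpoint and the toy dataset are placeholders, not the paper's setup.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

class CommentDataset(Dataset):
    """Wraps tokenized comments and binary covert-toxicity labels."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = non-toxic, 1 = covertly toxic

# Placeholder examples; real training would load the annotated comment data.
train_ds = CommentDataset(
    ["You people always make things so complicated.", "Thanks for the help!"],
    [1, 0],
    tokenizer,
)

args = TrainingArguments(
    output_dir="covert_toxicity_bert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```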