Capturing Covertly Toxic Speech via Crowdsourcing

Alyssa Lees, Daniel Borkan, Ian Kivlichan, Jorge Nario, Tesh Goyal
Abstract
We study the task of labeling covert or veiled toxicity in online conversations. Prior research has highlighted the difficulty of creating language models that recognize nuanced toxicity such as microaggressions. Our investigations further underscore the difficulty of reliably eliciting such labels from raters via crowdsourcing. We introduce an initial dataset, COVERTTOXICITY, which aims to identify and categorize such comments using a refined rater template. Finally, we fine-tune a comment-domain BERT model to classify covertly offensive comments and compare it against existing baselines.
Anthology ID: 2021.hcinlp-1.3
Volume: Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
Month: April
Year: 2021
Address: Online
Venues: EACL | HCINLP
Publisher: Association for Computational Linguistics
Pages: 14–20
URL: https://aclanthology.org/2021.hcinlp-1.3
PDF: https://aclanthology.org/2021.hcinlp-1.3.pdf
Data: SBIC