Ian Kivlichan


2021

Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation
Ian Kivlichan | Zi Lin | Jeremiah Liu | Lucy Vasserman
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)

Content moderation is often performed by a collaboration between humans and machine learning models. However, it is not well understood how to design the collaborative process so as to maximize the combined moderator-model system performance. This work presents a rigorous study of this problem, focusing on an approach that incorporates model uncertainty into the collaborative process. First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes human decisions. Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. We find that an uncertainty-based strategy consistently outperforms the widely used strategy based on toxicity scores, and moreover that the choice of review strategy drastically changes the overall system performance. Our results demonstrate the importance of rigorous metrics for understanding and developing effective moderator-model systems for content moderation, as well as the utility of uncertainty estimation in this domain.
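The capacity-constrained collaboration the abstract describes can be sketched in a few lines: the human moderator reviews only a fixed budget of comments, chosen by some review strategy, and the model auto-decides the rest. The simulation below is a hypothetical illustration (the data, the assumption that human decisions are perfect, and the specific score distributions are all invented here, not taken from the paper); it only shows why a strategy that reviews the model's most uncertain comments can use the same human budget more efficiently than one that reviews the highest-scoring comments.

```python
import random

def system_accuracy(probs, labels, budget, strategy):
    """Accuracy of a model-moderator system: the `budget` comments ranked
    highest by `strategy` go to a (perfect) human reviewer; the remainder
    are auto-decided by thresholding the model score at 0.5."""
    order = sorted(range(len(probs)), key=lambda i: strategy(probs[i]), reverse=True)
    reviewed = set(order[:budget])
    correct = 0
    for i, (p, y) in enumerate(zip(probs, labels)):
        pred = y if i in reviewed else int(p >= 0.5)
        correct += pred == y
    return correct / len(labels)

toxicity_score = lambda p: p            # review the highest-scoring comments
uncertainty = lambda p: -abs(p - 0.5)   # review comments nearest the decision boundary

# Synthetic data: a model that is right on average but noisy near the boundary.
random.seed(0)
labels = [random.randint(0, 1) for _ in range(1000)]
probs = [min(1.0, max(0.0, 0.2 + 0.6 * y + random.gauss(0, 0.25))) for y in labels]

print("toxicity-score strategy:", system_accuracy(probs, labels, 100, toxicity_score))
print("uncertainty strategy:   ", system_accuracy(probs, labels, 100, uncertainty))
```

With this toy setup the uncertainty strategy spends its budget on comments the model is likely to get wrong, while the toxicity-score strategy mostly re-reviews comments the model would have auto-decided correctly anyway; the gap between the two printed accuracies is the kind of effect the paper's metrics are designed to quantify.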

Capturing Covertly Toxic Speech via Crowdsourcing
Alyssa Lees | Daniel Borkan | Ian Kivlichan | Jorge Nario | Tesh Goyal
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing

We study the task of labeling covert or veiled toxicity in online conversations. Prior research has highlighted the difficulty in creating language models that recognize nuanced toxicity such as microaggressions. Our investigations further underscore the difficulty in parsing such labels reliably from raters via crowdsourcing. We introduce an initial dataset, COVERTTOXICITY, which aims to identify and categorize such comments from a refined rater template. Finally, we fine-tune a comment-domain BERT model to classify covertly offensive comments and compare against existing baselines.
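The difficulty of "parsing such labels reliably from raters" is typically quantified with a chance-corrected agreement statistic. As a hedged illustration (the rating values below are hypothetical, not drawn from the COVERTTOXICITY data), here is a minimal Fleiss' kappa computation for items that each receive the same number of crowdsourced ratings:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of per-item rating lists, each of equal
    length n (the number of raters per item). Returns 1.0 for perfect
    agreement, ~0 for chance-level agreement, negative below chance."""
    n = len(ratings[0])
    totals = Counter()            # category counts over all ratings
    per_item = []                 # observed agreement per item
    for row in ratings:
        counts = Counter(row)
        totals.update(counts)
        per_item.append((sum(c * c for c in counts.values()) - n) / (n * (n - 1)))
    p_bar = sum(per_item) / len(ratings)                    # mean observed agreement
    total = len(ratings) * n
    p_e = sum((c / total) ** 2 for c in totals.values())    # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical ratings: three raters labeling comments as covertly toxic or not.
agree = [["covert", "covert", "covert"], ["none", "none", "none"]]
split = [["covert", "none", "covert"], ["none", "covert", "none"]]
print(fleiss_kappa(agree))   # perfect agreement
print(fleiss_kappa(split))   # heavy disagreement
```

Low kappa on a nuanced category like microaggressions is exactly the unreliability the abstract points to, and motivates refining the rater template rather than simply collecting more labels.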