Lucy Vasserman
2022
Lost in Distillation: A Case Study in Toxicity Modeling
Alyssa Chvasta | Alyssa Lees | Jeffrey Sorensen | Lucy Vasserman | Nitesh Goyal
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one. In particular, distillation is of tremendous benefit under real-world constraints such as serving latency or serving at scale. However, a loss of robustness in language understanding may be hidden in the process and not immediately revealed by high-level evaluation metrics. In this work, we investigate the hidden costs: what is “lost in distillation”, especially with regard to identity-based model bias, using toxicity modeling as a case study. With reproducible models trained on open source datasets, we investigate models distilled from a BERT teacher baseline, and we examine these hidden performance costs in both open source and proprietary big data models.
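To make the distillation setup the abstract refers to concrete, here is a minimal sketch of a standard soft-label distillation objective (a student trained to match a teacher's softened output distribution plus the hard labels). The temperature, weighting, and toy logits are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a soft-label knowledge-distillation loss; the
# temperature T, blend weight alpha, and toy tensors below are assumptions
# for illustration only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft KL term (student mimics teacher) with hard-label CE."""
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, rescaled by T^2 so its gradient magnitude matches
    # the hard-label cross-entropy term.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: a 2-class toxicity head on a batch of 4 examples.
student = torch.randn(4, 2, requires_grad=True)
teacher = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
print(distillation_loss(student, teacher, labels))
```

The paper's point is that a student optimized this way can match the teacher on aggregate metrics while still diverging on identity-related slices, so slice-level evaluation is needed alongside the headline numbers.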
2021
Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation
Ian Kivlichan | Zi Lin | Jeremiah Liu | Lucy Vasserman
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Content moderation is often performed by a collaboration between humans and machine learning models. However, it is not well understood how to design the collaborative process so as to maximize the combined moderator-model system performance. This work presents a rigorous study of this problem, focusing on an approach that incorporates model uncertainty into the collaborative process. First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes human decisions. Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. We find that an uncertainty-based strategy consistently outperforms the widely used strategy based on toxicity scores, and moreover that the choice of review strategy drastically changes the overall system performance. Our results demonstrate the importance of rigorous metrics for understanding and developing effective moderator-model systems for content moderation, as well as the utility of uncertainty estimation in this domain.
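The following is a hedged sketch of the two review strategies the abstract contrasts under a fixed human-review capacity: ranking comments by raw toxicity score versus by model uncertainty. Here, proximity of the score to the 0.5 decision boundary stands in for a calibrated uncertainty estimate; the scores, capacity, and function name are illustrative assumptions, not the paper's implementation.

```python
# Assumed illustration: route a fixed budget of comments to a human
# moderator, either by highest toxicity score or by highest uncertainty
# (distance to the 0.5 boundary, as a simple proxy).
import numpy as np

def select_for_review(toxicity_scores, capacity, strategy="uncertainty"):
    """Return indices of the `capacity` comments routed to a moderator."""
    scores = np.asarray(toxicity_scores)
    if strategy == "toxicity":
        # Review the comments the model scores as most toxic.
        ranking = np.argsort(-scores)
    elif strategy == "uncertainty":
        # Review the comments the model is least certain about.
        ranking = np.argsort(np.abs(scores - 0.5))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return ranking[:capacity]

scores = [0.05, 0.48, 0.92, 0.55, 0.10, 0.78]
print(select_for_review(scores, capacity=2, strategy="toxicity"))     # [2 5]
print(select_for_review(scores, capacity=2, strategy="uncertainty"))  # [1 3]
```

The benchmark finding is that spending the moderator budget on uncertain examples, rather than on the highest-scoring ones, consistently yields better combined system performance.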