Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators through a User-Centric Method

Yang Cao, Lovely-Frances Domingo, Sarah Gilbert, Michelle Mazurek, Katie Shilton, Hal Daumé III


Abstract
Extensive efforts in automated content moderation have focused on developing models that identify toxic, offensive, and hateful content, with the aim of lightening the load for moderators. Yet it remains uncertain whether improvements on those tasks have truly addressed moderators’ needs in accomplishing their work. In this paper, we surface gaps between past research efforts to automate aspects of content moderation and the needs of volunteer content moderators in identifying violations of various moderation rules. To do so, we conduct a model review on Hugging Face to assess how well the available models cover the moderation rules and guidelines of three exemplar forums. We further put state-of-the-art LLMs to the test, evaluating how well these models flag violations of platform rules from one particular forum. Finally, we conduct a user survey of volunteer moderators to gain insight into their perspectives on useful moderation models. Overall, we observe a non-trivial gap: for a significant portion of the rules, no developed models exist and LLMs exhibit moderate to low performance. Moderators’ responses offer guidance for future work on developing moderation-assistant models.
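The model review described above involves searching Hugging Face for models matching each moderation rule. Below is a minimal sketch of that kind of review, not the authors' actual pipeline; the rule keywords are illustrative assumptions, not the paper's rule taxonomy.

```python
# Minimal sketch of a Hugging Face "model review": for each moderation-rule
# keyword, list the most-downloaded matching models on the Hub.
from huggingface_hub import HfApi

api = HfApi()
# Hypothetical rule keywords; the paper derives rules from three exemplar forums.
rule_keywords = ["toxicity", "hate speech", "spam detection", "misinformation"]

for keyword in rule_keywords:
    # Sort matches by download count, descending, and keep the top five.
    models = api.list_models(search=keyword, sort="downloads", direction=-1, limit=5)
    print(f"\nTop models for rule keyword '{keyword}':")
    for m in models:
        print(f"  {m.id} (downloads: {m.downloads})")
```

A review like this surfaces which rules have many candidate models and which have none, which is the kind of coverage gap the paper measures.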
Anthology ID: 2024.emnlp-main.209
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 3567–3587
URL: https://aclanthology.org/2024.emnlp-main.209
Cite (ACL):
Yang Cao, Lovely-Frances Domingo, Sarah Gilbert, Michelle Mazurek, Katie Shilton, and Hal Daumé III. 2024. Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators through a User-Centric Method. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3567–3587, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators through a User-Centric Method (Cao et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.209.pdf