Fortifying Toxic Speech Detectors Against Veiled Toxicity

Xiaochuang Han, Yulia Tsvetkov


Abstract
Modern toxic speech detectors are incompetent in recognizing disguised offensive language, such as adversarial attacks that deliberately avoid known toxic lexicons, or manifestations of implicit bias. Building a large annotated dataset for such veiled toxicity can be very expensive. In this work, we propose a framework aimed at fortifying existing toxic speech detectors without a large labeled corpus of veiled toxicity. Just a handful of probing examples are used to surface orders of magnitude more disguised offenses. We augment the toxic speech detector’s training data with these discovered offensive examples, thereby making it more robust to veiled toxicity while preserving its utility in detecting overt toxicity.
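The abstract describes a fortification loop: a handful of probing examples are used to surface many more disguised offenses from a larger pool, and the surfaced examples are then added to the detector's training data before retraining. The sketch below illustrates that loop in Python under stated assumptions; the TF-IDF nearest-neighbour retrieval, the `fortify` function, and all variable names are illustrative stand-ins, not the authors' implementation or their actual probing mechanism.

```python
# Hypothetical sketch of the fortification loop described in the abstract.
# TF-IDF similarity to the probe set is an assumed stand-in for the paper's
# probing step; the function and variable names are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def fortify(train_texts, train_labels, probe_texts, unlabeled_texts, top_k=1000):
    """Surface likely veiled-toxic examples from an unlabeled pool using a
    handful of probes, then retrain the detector on the augmented data."""
    vec = TfidfVectorizer(min_df=2)
    vec.fit(list(train_texts) + list(unlabeled_texts))

    # Rank unlabeled examples by their maximum similarity to any probe.
    probe_mat = vec.transform(probe_texts)
    pool_mat = vec.transform(unlabeled_texts)
    scores = cosine_similarity(pool_mat, probe_mat).max(axis=1)
    surfaced = [unlabeled_texts[i] for i in np.argsort(-scores)[:top_k]]

    # Augment the original training data with the surfaced examples
    # (labeled toxic) and retrain the detector.
    aug_texts = list(train_texts) + surfaced
    aug_labels = list(train_labels) + [1] * len(surfaced)  # 1 = toxic
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.transform(aug_texts), aug_labels)
    return clf, vec
```

In this sketch the detector is a simple bag-of-words classifier; the same augment-and-retrain pattern applies unchanged if the detector is a neural model and the probing step uses a different relevance measure.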
Anthology ID:
2020.emnlp-main.622
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7732–7739
URL:
https://aclanthology.org/2020.emnlp-main.622
DOI:
10.18653/v1/2020.emnlp-main.622
Cite (ACL):
Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying Toxic Speech Detectors Against Veiled Toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7732–7739, Online. Association for Computational Linguistics.
Cite (Informal):
Fortifying Toxic Speech Detectors Against Veiled Toxicity (Han & Tsvetkov, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.622.pdf
Video:
https://slideslive.com/38939156
Code:
xhan77/veiled-toxicity-detection
Data:
SBIC