No offence, Bert - I insult only humans! Multilingual sentence-level attack on toxicity detection networks

Sergey Berezin, Reza Farahbakhsh, Noel Crespi


Abstract
We introduce a simple yet effective sentence-level attack on black-box toxicity detection models. By appending several positive words or sentences to the end of a hateful message, we are able to flip the prediction of a neural network and bypass the toxicity detection check. The approach is shown to work on seven languages from three different language families. We also describe a defence mechanism against this attack and discuss its limitations.
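The attack described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `toxicity_score` callable stands in for any black-box toxicity detector (e.g. a hosted API returning a probability), and the list of positive sentences is a hypothetical example.

```python
from typing import Callable

# Hypothetical pool of benign sentences to append (placeholder content).
POSITIVE_SENTENCES = [
    "I love sunshine and kindness.",
    "You are a wonderful person.",
    "Have a great and happy day!",
]

def sentence_level_attack(
    message: str,
    toxicity_score: Callable[[str], float],
    threshold: float = 0.5,
    max_appends: int = 10,
) -> str:
    """Append positive sentences to a message until the black-box
    detector's toxicity score drops below the decision threshold,
    or until max_appends sentences have been added."""
    adversarial = message
    for i in range(max_appends):
        if toxicity_score(adversarial) < threshold:
            break  # detector no longer flags the message
        adversarial += " " + POSITIVE_SENTENCES[i % len(POSITIVE_SENTENCES)]
    return adversarial
```

Because only the model's output score is queried, the attack fits the black-box setting: no gradients or model internals are required, which is what makes the defence discussed in the paper non-trivial.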
Anthology ID:
2023.findings-emnlp.155
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2362–2369
URL:
https://aclanthology.org/2023.findings-emnlp.155
DOI:
10.18653/v1/2023.findings-emnlp.155
Cite (ACL):
Sergey Berezin, Reza Farahbakhsh, and Noel Crespi. 2023. No offence, Bert - I insult only humans! Multilingual sentence-level attack on toxicity detection networks. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2362–2369, Singapore. Association for Computational Linguistics.
Cite (Informal):
No offence, Bert - I insult only humans! Multilingual sentence-level attack on toxicity detection networks (Berezin et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.155.pdf