Robust Conversational Agents against Imperceptible Toxicity Triggers

Ninareh Mehrabi, Ahmad Beirami, Fred Morstatter, Aram Galstyan


Abstract
Warning: this paper contains content that may be offensive or upsetting.

Recent research in Natural Language Processing (NLP) has advanced the development of various toxicity detection models with the intention of identifying and mitigating toxic language from existing systems. Despite the abundance of research in this area, less attention has been given to adversarial attacks that force the system to generate toxic language and the defense against them. Existing work to generate such attacks is either based on human-generated attacks, which are costly and not scalable, or, in the case of automatic attacks, the attack vector does not conform to human-like language, which can be detected using a language model loss. In this work, we propose attacks against conversational agents that are imperceptible, i.e., they fit the conversation in terms of coherency, relevancy, and fluency, while they are effective and scalable, i.e., they can automatically trigger the system into generating toxic language. We then propose a defense mechanism against such attacks which not only mitigates the attack but also attempts to maintain the conversational flow. Through automatic and human evaluations, we show that our defense is effective at avoiding toxic language generation even against imperceptible toxicity triggers while the generated language fits the conversation in terms of coherency and relevancy. Lastly, we establish the generalizability of such a defense mechanism on language generation models beyond conversational agents.
Anthology ID:
2022.naacl-main.204
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2831–2847
URL:
https://aclanthology.org/2022.naacl-main.204
DOI:
10.18653/v1/2022.naacl-main.204
Cite (ACL):
Ninareh Mehrabi, Ahmad Beirami, Fred Morstatter, and Aram Galstyan. 2022. Robust Conversational Agents against Imperceptible Toxicity Triggers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2831–2847, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Robust Conversational Agents against Imperceptible Toxicity Triggers (Mehrabi et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.204.pdf
Code
 ninarehm/robust-agents
Data
Wizard of Wikipedia