%0 Conference Proceedings
%T Bot-Adversarial Dialogue for Safe Conversational Agents
%A Xu, Jing
%A Ju, Da
%A Li, Margaret
%A Boureau, Y-Lan
%A Weston, Jason
%A Dinan, Emily
%Y Toutanova, Kristina
%Y Rumshisky, Anna
%Y Zettlemoyer, Luke
%Y Hakkani-Tur, Dilek
%Y Beltagy, Iz
%Y Bethard, Steven
%Y Cotterell, Ryan
%Y Chakraborty, Tanmoy
%Y Zhou, Yichao
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F xu-etal-2021-bot
%X Conversational agents trained on large unlabeled corpora of human interactions will learn patterns and mimic behaviors therein, which include offensive or otherwise toxic behavior. We introduce a new human-and-model-in-the-loop framework for evaluating the toxicity of such models, and compare a variety of existing methods in both the cases of non-adversarial and adversarial users that expose their weaknesses. We then go on to propose two novel methods for safe conversational agents, by either training on data from our new human-and-model-in-the-loop framework in a two-stage system, or "baking-in" safety to the generative model itself. We find our new techniques are (i) safer than existing models, while (ii) maintaining usability metrics such as engagingness relative to state-of-the-art chatbots. In contrast, we expose serious safety issues in existing standard systems like GPT2, DialoGPT, and BlenderBot.
%R 10.18653/v1/2021.naacl-main.235
%U https://aclanthology.org/2021.naacl-main.235
%U https://doi.org/10.18653/v1/2021.naacl-main.235
%P 2950-2968