%0 Conference Proceedings
%T SaFeRDialogues: Taking Feedback Gracefully after Conversational Safety Failures
%A Ung, Megan
%A Xu, Jing
%A Boureau, Y-Lan
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F ung-etal-2022-saferdialogues
%X Current open-domain conversational models can easily be made to talk in inadequate ways. Online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt, so as to generate fewer of these safety failures. However, current state-of-the-art models tend to react to feedback with defensive or oblivious responses. This makes for an unpleasant experience and may discourage conversation partners from giving feedback in the future. This work proposes SaFeRDialogues, a task and dataset of graceful responses to conversational feedback about safety failures. We collect a dataset of 8k dialogues demonstrating safety failures, feedback signaling them, and a response acknowledging the feedback. We show how fine-tuning on this dataset results in conversations that human raters deem considerably more likely to lead to a civil conversation, without sacrificing engagingness or general conversational ability.
%R 10.18653/v1/2022.acl-long.447
%U https://aclanthology.org/2022.acl-long.447
%U https://doi.org/10.18653/v1/2022.acl-long.447
%P 6462-6481