%0 Conference Proceedings
%T Detoxifying Language Models Risks Marginalizing Minority Voices
%A Xu, Albert
%A Pathak, Eshaan
%A Wallace, Eric
%A Gururangan, Suchin
%A Sap, Maarten
%A Klein, Dan
%Y Toutanova, Kristina
%Y Rumshisky, Anna
%Y Zettlemoyer, Luke
%Y Hakkani-Tur, Dilek
%Y Beltagy, Iz
%Y Bethard, Steven
%Y Cotterell, Ryan
%Y Chakraborty, Tanmoy
%Y Zhou, Yichao
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F xu-etal-2021-detoxifying
%X Language models (LMs) must be both safe and equitable to be responsibly deployed in practice. With safety in mind, numerous detoxification techniques (e.g., Dathathri et al. 2020; Krause et al. 2020) have been proposed to mitigate toxic LM generations. In this work, we show that these detoxification techniques hurt equity: they decrease the utility of LMs on language used by marginalized groups (e.g., African-American English and minority identity mentions). In particular, we perform automatic and human evaluations of text generation quality when LMs are conditioned on inputs with different dialects and group identifiers. We find that detoxification makes LMs more brittle to distribution shift, especially on language used by marginalized groups. We identify that these failures stem from detoxification methods exploiting spurious correlations in toxicity datasets. Overall, our results highlight the tension between the controllability and distributional robustness of LMs.
%R 10.18653/v1/2021.naacl-main.190
%U https://aclanthology.org/2021.naacl-main.190
%U https://doi.org/10.18653/v1/2021.naacl-main.190
%P 2390-2397