Explaining Toxic Text via Knowledge Enhanced Text Generation

Rohit Sridhar, Diyi Yang


Abstract
Warning: This paper contains content that is offensive and may be upsetting. Biased or toxic speech can be harmful to various demographic groups. Therefore, it is important for models not only to detect such speech, but also to explain why a given text is toxic. Previous literature has mostly focused on classifying and detecting toxic speech, and existing efforts to explain stereotypes in toxic speech mainly use standard text generation approaches, resulting in generic and repetitive explanations. Building on these prior works, we introduce a novel knowledge-informed encoder-decoder framework that utilizes multiple knowledge sources to generate implications of biased text. Experiments show that our knowledge-informed models significantly outperform prior state-of-the-art models and generate more detailed explanations of stereotypes in toxic speech than baselines, both quantitatively and qualitatively.
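The abstract does not spell out the model details. The minimal sketch below illustrates one common way to knowledge-enhance a pretrained encoder-decoder for this task: retrieved knowledge (e.g., ConceptNet-style triples) is appended to the input post before generating an explanation of the implied stereotype. The choice of BART, the delimiter token, and the retrieval stub are illustrative assumptions, not the paper's actual implementation.

# Hedged sketch of a knowledge-informed encoder-decoder for generating
# implications of biased text. Model name, delimiter, and retrieval step
# below are assumptions for illustration only.
from transformers import BartForConditionalGeneration, BartTokenizer

def retrieve_knowledge(post):
    # Placeholder: a real system would query an external knowledge base
    # (e.g., ConceptNet) for triples relevant to the post.
    return ["group related_to people", "stereotype is_a overgeneralization"]

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

post = "Example of a biased statement about a group."
# Append retrieved knowledge to the encoder input, separated by a delimiter.
knowledge = " [KNOW] ".join(retrieve_knowledge(post))
inputs = tokenizer(post + " [KNOW] " + knowledge, return_tensors="pt", truncation=True)

# Decode an explanation of the implied stereotype from the augmented input.
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))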
Anthology ID:
2022.naacl-main.59
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
811–826
URL:
https://aclanthology.org/2022.naacl-main.59
DOI:
10.18653/v1/2022.naacl-main.59
Cite (ACL):
Rohit Sridhar and Diyi Yang. 2022. Explaining Toxic Text via Knowledge Enhanced Text Generation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 811–826, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Explaining Toxic Text via Knowledge Enhanced Text Generation (Sridhar & Yang, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.59.pdf
Video:
https://aclanthology.org/2022.naacl-main.59.mp4
Data
ConceptNet, Implicit Hate, SBIC