Mitigating Biases in Toxic Language Detection through Invariant Rationalization

Yung-Sung Chuang, Mingye Gao, Hongyin Luo, James Glass, Hung-yi Lee, Yun-Nung Chen, Shang-Wen Li


Abstract
Automatic detection of toxic language plays an essential role in protecting social media users, especially minority groups, from verbal abuse. However, biases toward certain attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection. These biases make the learned models unfair and can even exacerbate the marginalization of people. Since current debiasing methods for general natural language understanding tasks cannot effectively mitigate the biases in toxicity detectors, we propose to use invariant rationalization (InvRat), a game-theoretic framework consisting of a rationale generator and a predictor, to rule out the spurious correlation of certain syntactic patterns (e.g., identity mentions, dialect) with toxicity labels. We empirically show that our method yields a lower false positive rate on both lexical and dialectal attributes than previous debiasing methods.
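The game-theoretic objective behind InvRat can be illustrated with a short sketch. In the InvRat framework, the rationale generator is penalized whenever an environment-aware predictor outperforms an environment-agnostic one on the selected rationale, since that gap indicates the rationale still leaks environment (i.e., bias attribute) information. The function below is a hypothetical illustration of that generator loss, not the authors' implementation; the function name, the ReLU-style penalty, and the weight `lam` are assumptions based on the general InvRat formulation.

```python
def invrat_generator_loss(loss_env_agnostic: float,
                          loss_env_aware: float,
                          lam: float = 1.0) -> float:
    """Hypothetical sketch of an InvRat-style generator objective.

    loss_env_agnostic: prediction loss of the predictor that sees only
        the rationale.
    loss_env_aware: prediction loss of the predictor that additionally
        sees the environment label (e.g., dialect or identity attribute).
    lam: weight on the invariance penalty.
    """
    # If the environment-aware predictor does better (smaller loss),
    # the rationale is not invariant: penalize the positive gap.
    invariance_gap = max(loss_env_agnostic - loss_env_aware, 0.0)
    return loss_env_agnostic + lam * invariance_gap
```

When the rationale is invariant to the environment, the environment-aware predictor gains nothing, the gap is non-positive, and the penalty vanishes; the generator then simply minimizes the prediction loss on the rationale.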
Anthology ID:
2021.woah-1.12
Volume:
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP | WOAH
Publisher:
Association for Computational Linguistics
Pages:
114–120
URL:
https://aclanthology.org/2021.woah-1.12
DOI:
10.18653/v1/2021.woah-1.12
Cite (ACL):
Yung-Sung Chuang, Mingye Gao, Hongyin Luo, James Glass, Hung-yi Lee, Yun-Nung Chen, and Shang-Wen Li. 2021. Mitigating Biases in Toxic Language Detection through Invariant Rationalization. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 114–120, Online. Association for Computational Linguistics.
Cite (Informal):
Mitigating Biases in Toxic Language Detection through Invariant Rationalization (Chuang et al., WOAH 2021)
PDF:
https://aclanthology.org/2021.woah-1.12.pdf
Code:
 voidism/invrat_debias