Debiasing Text Safety Classifiers through a Fairness-Aware Ensemble

Olivia Sturman, Aparna R Joshi, Bhaktipriya Radharapu, Piyush Kumar, Renee Shelby


Abstract
The increasing use of large language models (LLMs) demands performant guardrails to ensure the safety of their inputs and outputs. When these safeguards are trained on imbalanced data, they can learn societal biases. We present a lightweight, post-processing method for improving counterfactual fairness in closed-source text safety classifiers. Our approach involves building an ensemble that not only outperforms the input classifiers and policy-aligns them, but also acts as a debiasing regularizer. We introduce two threshold-agnostic metrics to assess the counterfactual fairness of a model, and demonstrate how combining these metrics with Fair Data Reweighting (FDW) helps mitigate biases. We create an expanded OpenAI dataset, and a new templated LLM-generated dataset based on user prompts, both of which are counterfactually balanced across identity groups and cover four key areas of safety; we will work towards publicly releasing these datasets. Our results show that our approach improves counterfactual fairness with minimal impact on model performance.
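The abstract names two threshold-agnostic counterfactual-fairness metrics without defining them here. As a rough illustration only, the minimal Python sketch below shows one plausible threshold-agnostic check: substitute identity terms into a template and measure the largest score gap the classifier produces across substitutions. The score callable, the template, and the gap definition are all assumptions for illustration, not the paper's actual metrics.

from itertools import combinations
from typing import Callable, Iterable

def counterfactual_gap(
    score: Callable[[str], float],  # any classifier returning P(unsafe); assumed interface
    template: str,                  # counterfactual template, e.g. "I can't stand {group}."
    groups: Iterable[str],          # identity terms to substitute
) -> float:
    # Score every counterfactual variant of the template.
    scores = [score(template.format(group=g)) for g in groups]
    # A counterfactually fair classifier scores all variants identically
    # (gap = 0); larger gaps flag potential identity bias. The gap is
    # computed on raw scores, so no decision threshold is involved.
    return max(abs(a - b) for a, b in combinations(scores, 2))

# Hypothetical usage:
# gap = counterfactual_gap(my_model.unsafe_probability,
#                          "I can't stand {group}.",
#                          ["women", "men", "Muslims", "Christians"])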
Anthology ID:
2024.emnlp-industry.16
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Month:
November
Year:
2024
Address:
Miami, Florida, US
Editors:
Franck Dernoncourt, Daniel Preoţiuc-Pietro, Anastasia Shimorina
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
199–214
URL:
https://aclanthology.org/2024.emnlp-industry.16
Cite (ACL):
Olivia Sturman, Aparna R Joshi, Bhaktipriya Radharapu, Piyush Kumar, and Renee Shelby. 2024. Debiasing Text Safety Classifiers through a Fairness-Aware Ensemble. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 199–214, Miami, Florida, US. Association for Computational Linguistics.
Cite (Informal):
Debiasing Text Safety Classifiers through a Fairness-Aware Ensemble (Sturman et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-industry.16.pdf
Presentation:
https://aclanthology.org/2024.emnlp-industry.16.presentation.pdf