Evaluating Debiasing Techniques for Intersectional Biases

Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann


Abstract
Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques. Evaluation of NLP debiasing methods has largely been limited to binary attributes in isolation (e.g., debiasing with respect to binary gender or race); however, many corpora involve multiple such attributes, possibly with higher cardinality. In this paper we argue that a truly fair model must consider ‘gerrymandering’ groups which comprise not only single attributes, but also intersectional groups. We evaluate a form of bias-constrained model which is new to NLP, as well as an extension of the iterative nullspace projection technique which can handle multiple protected attributes.
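For readers unfamiliar with the iterative nullspace projection technique the abstract builds on, it works by repeatedly fitting a linear probe that predicts the protected attribute from the representations, then projecting the representations onto the nullspace of that probe so the attribute is no longer linearly decodable. The sketch below is illustrative only, not the authors' implementation: it uses a least-squares probe as a stand-in for the logistic classifiers typically used, and all function names are my own.

```python
import numpy as np

def nullspace_projection_matrix(W):
    """Orthogonal projection onto the nullspace of the row space of W."""
    # SVD gives an orthonormal basis of the row space in Vt.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    rank = int(np.sum(s > 1e-10))
    B = Vt[:rank]                         # (rank, d) row-space basis
    return np.eye(W.shape[1]) - B.T @ B   # I - B^T B projects out that space

def inlp(X, z, n_iters=3):
    """Iteratively remove the linearly decodable attribute z from X.

    X: (n, d) representations; z: (n,) protected attribute labels.
    Returns the debiased representations and the composed projection.
    """
    d = X.shape[1]
    P = np.eye(d)
    Xp = X.copy()
    for _ in range(n_iters):
        # Fit a linear probe for the protected attribute
        # (least-squares stand-in for a logistic probe).
        w, *_ = np.linalg.lstsq(Xp, z, rcond=None)
        # Remove the direction the probe exploited.
        P = nullspace_projection_matrix(w.reshape(1, -1)) @ P
        Xp = X @ P.T
    return Xp, P
```

After a few iterations, a freshly fitted linear probe can no longer recover the attribute from the projected representations. The paper's extension concerns handling several such attributes (and their intersections) rather than a single binary one.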
Anthology ID:
2021.emnlp-main.193
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2492–2498
URL:
https://aclanthology.org/2021.emnlp-main.193
DOI:
10.18653/v1/2021.emnlp-main.193
Cite (ACL):
Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, and Lea Frermann. 2021. Evaluating Debiasing Techniques for Intersectional Biases. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2492–2498, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Evaluating Debiasing Techniques for Intersectional Biases (Subramanian et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.193.pdf
Video:
https://aclanthology.org/2021.emnlp-main.193.mp4