Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg


Abstract
The ability to control for the kinds of information encoded in neural representations has a variety of use cases, especially in light of the challenge of interpreting these models. We present Iterative Null-space Projection (INLP), a novel method for removing information from neural representations. Our method is based on repeated training of linear classifiers that predict a certain property we aim to remove, followed by projection of the representations onto their null-space. By doing so, the classifiers become oblivious to that target property, making it hard to linearly separate the data according to it. While applicable to multiple use cases, we evaluate our method on bias and fairness, and show that it is able to mitigate bias in word embeddings, as well as to increase fairness in a multi-class classification setting.
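The iteration described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' released implementation: each round fits a closed-form least-squares linear predictor of the protected label (standing in for the paper's trained classifiers) and composes the projection onto that predictor's null-space with the projections from earlier rounds. The data, dimensions, and `inlp` function name are all assumptions for the sketch.

```python
import numpy as np

def nullspace_projection(w):
    """Orthogonal projection matrix onto the null-space of a single direction w."""
    w = w / np.linalg.norm(w)
    return np.eye(len(w)) - np.outer(w, w)

def inlp(X, y, n_iters=3):
    """Iteratively remove the linear predictability of y from X.

    Each round fits a least-squares linear predictor of y on the projected
    representations, then projects out that predictor's direction."""
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        Xp = X @ P
        # min-norm least-squares predictor of y from the projected features
        w, *_ = np.linalg.lstsq(Xp, y, rcond=None)
        if np.linalg.norm(w) < 1e-10:
            break  # nothing linearly predictive is left to remove
        P = nullspace_projection(w) @ P
    return P

# toy data: feature 0 leaks the (binary) protected attribute
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 5))
X[:, 0] += 3 * y

yc = y - y.mean()
P = inlp(X, yc)
Xg = X @ P  # "guarded" representations

# after INLP, a fresh linear predictor recovers almost nothing
w, *_ = np.linalg.lstsq(Xg, yc, rcond=None)
print(np.linalg.norm(Xg @ w))  # residual linear predictability, near zero
```

Because each round's direction lies in the range of the current projection, the composed `P` remains an orthogonal projection (idempotent), which is what makes it safe to apply the rounds one after another.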
Anthology ID:
2020.acl-main.647
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Note:
Pages:
7237–7256
URL:
https://aclanthology.org/2020.acl-main.647
DOI:
10.18653/v1/2020.acl-main.647
Cite (ACL):
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.
Cite (Informal):
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection (Ravfogel et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.647.pdf
Video:
http://slideslive.com/38929453
Code:
Shaul1321/nullspace_projection