Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning

Przemyslaw Joniak, Akiko Aizawa


Abstract
Language model debiasing has emerged as an important field of study in the NLP community. Numerous debiasing techniques have been proposed, but bias ablation remains an unaddressed issue. We demonstrate a novel framework for inspecting bias in pre-trained transformer-based language models via movement pruning. Given a model and a debiasing objective, our framework finds a subset of the model containing less bias than the original model. We implement our framework by pruning the model while fine-tuning it on the debiasing objective. We optimize only the pruning scores: parameters coupled with the model’s weights that act as gates. We experiment with pruning attention heads, an important building block of transformers: we prune square blocks, and we also establish a new way of pruning entire heads. Finally, we demonstrate the usage of our framework on gender bias and, based on our findings, propose an improvement to an existing debiasing method. Additionally, we re-discover a bias–performance trade-off: the better the model performs, the more bias it contains.
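The gating mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that): it only shows the core idea that each weight is paired with a learnable score, and at forward time a top-k mask over the scores decides which weights survive. The function names, shapes, and the fixed keep fraction here are illustrative assumptions.

```python
# Illustrative sketch of movement-pruning-style gating (not the authors' code).
# Each weight w[i][j] is paired with a learnable score s[i][j]; only weights
# whose scores fall in the top `keep_fraction` are kept, so the scores act
# as gates while the weights themselves can stay frozen.

def topk_mask(scores, keep_fraction):
    """Binary mask keeping the top `keep_fraction` of score entries."""
    flat = sorted((v for row in scores for v in row), reverse=True)
    k = max(1, int(len(flat) * keep_fraction))
    threshold = flat[k - 1]
    return [[1.0 if v >= threshold else 0.0 for v in row] for row in scores]

def gated_linear(x, weights, scores, keep_fraction=0.5):
    """y = x @ (W * mask(S)): scores gate the weight matrix."""
    mask = topk_mask(scores, keep_fraction)
    n_out = len(weights[0])
    return [sum(x[i] * weights[i][j] * mask[i][j] for i in range(len(x)))
            for j in range(n_out)]

# With scores [[0.9, 0.1], [0.2, 0.8]] and keep_fraction=0.5, only the
# weights at positions (0,0) and (1,1) pass through the gate.
y = gated_linear([1.0, 1.0], [[1.0, 2.0], [3.0, 4.0]],
                 [[0.9, 0.1], [0.2, 0.8]], keep_fraction=0.5)
```

In training, the scores would be updated by gradient descent (via a straight-through estimator, since the mask is non-differentiable) while fine-tuning on the debiasing objective; this sketch shows only the forward gating.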
Anthology ID:
2022.gebnlp-1.6
Volume:
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
July
Year:
2022
Address:
Seattle, Washington
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
67–73
URL:
https://aclanthology.org/2022.gebnlp-1.6
DOI:
10.18653/v1/2022.gebnlp-1.6
Cite (ACL):
Przemyslaw Joniak and Akiko Aizawa. 2022. Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 67–73, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning (Joniak & Aizawa, GeBNLP 2022)
PDF:
https://aclanthology.org/2022.gebnlp-1.6.pdf
Video:
https://aclanthology.org/2022.gebnlp-1.6.mp4
Code:
kainoj/pruning-bias
Data:
CoLA, GLUE