%0 Conference Proceedings
%T Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning
%A Joniak, Przemyslaw
%A Aizawa, Akiko
%Y Hardmeier, Christian
%Y Basta, Christine
%Y Costa-jussà, Marta R.
%Y Stanovsky, Gabriel
%Y Gonen, Hila
%S Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, Washington
%F joniak-aizawa-2022-gender
%X Language model debiasing has emerged as an important field of study in the NLP community. Numerous debiasing techniques have been proposed, but bias ablation remains an unaddressed issue. We demonstrate a novel framework for inspecting bias in pre-trained transformer-based language models via movement pruning. Given a model and a debiasing objective, our framework finds a subset of the model containing less bias than the original model. We implement our framework by pruning the model while fine-tuning it on the debiasing objective. Only the pruning scores are optimized – parameters coupled with the model's weights that act as gates. We experiment with pruning attention heads, an important building block of transformers: we prune square blocks, and we also establish a new way of pruning entire heads. Lastly, we demonstrate the usage of our framework on gender bias and, based on our findings, propose an improvement to an existing debiasing method. Additionally, we re-discover a bias-performance trade-off: the better the model performs, the more bias it contains.
%R 10.18653/v1/2022.gebnlp-1.6
%U https://aclanthology.org/2022.gebnlp-1.6
%U https://doi.org/10.18653/v1/2022.gebnlp-1.6
%P 67-73