Debiasing Pre-Trained Language Models via Efficient Fine-Tuning

Michael Gira, Ruisu Zhang, Kangwook Lee


Abstract
An explosion in the popularity of transformer-based language models (such as GPT-3, BERT, RoBERTa, and ALBERT) has opened the doors to new machine learning applications involving language modeling, text generation, and more. However, recent scrutiny reveals that these language models contain inherent biases towards certain demographics reflected in their training data. While prior research has attempted to mitigate this problem, existing approaches either fail to remove the bias completely, degrade performance (“catastrophic forgetting”), or are costly to execute. This work examines how to reduce gender bias in a GPT-2 language model by fine-tuning less than 1% of its parameters. Through quantitative benchmarks, we show that this is a viable way to reduce prejudice in pre-trained language models while remaining cost-effective at scale.
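The paper's full recipe is described in the PDF linked below; purely as an illustration, the sketch that follows shows one common way to fine-tune well under 1% of GPT-2's weights with Hugging Face Transformers: freeze every parameter except the bias and LayerNorm terms and train with a standard causal language-modeling loss. The training texts, learning rate, and the choice of which parameters to unfreeze are assumptions for this example, not necessarily the authors' configuration.

```python
# Illustrative sketch only: parameter-efficient fine-tuning of GPT-2 in which all
# weights are frozen except bias and LayerNorm terms (well under 1% of the model).
# The debiasing corpus and hyperparameters are placeholders, not the paper's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Freeze everything, then re-enable gradients only for biases and LayerNorms.
for name, param in model.named_parameters():
    param.requires_grad = ("bias" in name) or ("ln_" in name)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

def training_step(batch_texts):
    """One causal-LM step on a batch of debiasing sentences (placeholder data)."""
    enc = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)
    # Ignore padding positions in the loss.
    labels = enc.input_ids.masked_fill(enc.attention_mask == 0, -100)
    out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```

Evaluating the fine-tuned model on the benchmarks listed under Data (CrowS-Pairs, StereoSet, WinoBias) would then quantify how much the measured bias changes.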
Anthology ID:
2022.ltedi-1.8
Volume:
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Bharathi Raja Chakravarthi, B Bharathi, John P McCrae, Manel Zarrouk, Kalika Bali, Paul Buitelaar
Venue:
LTEDI
Publisher:
Association for Computational Linguistics
Pages:
59–69
URL:
https://aclanthology.org/2022.ltedi-1.8
DOI:
10.18653/v1/2022.ltedi-1.8
Cite (ACL):
Michael Gira, Ruisu Zhang, and Kangwook Lee. 2022. Debiasing Pre-Trained Language Models via Efficient Fine-Tuning. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 59–69, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Debiasing Pre-Trained Language Models via Efficient Fine-Tuning (Gira et al., LTEDI 2022)
PDF:
https://aclanthology.org/2022.ltedi-1.8.pdf
Video:
https://aclanthology.org/2022.ltedi-1.8.mp4
Code:
michaelgira23/debiasing-lms
Data:
CrowS-Pairs, StereoSet, WinoBias