An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models

Zhongbin Xie, Thomas Lukasiewicz


Abstract
The increasingly large size of modern pre-trained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) are less effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning.
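To illustrate the kind of counterfactual data augmentation the abstract refers to, the sketch below swaps gendered terms to create counterfactual copies of training sentences, which can then be used as fine-tuning data while only the prefix, prompt, or adapter parameters are updated. The word-pair list, function names, and simplified handling of ambiguous terms are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of two-sided counterfactual data augmentation (CDA) for gender bias.
# The term pairs and helpers are illustrative; real CDA word lists are much larger
# and handle ambiguous terms (e.g., "her" vs. "him"/"his") more carefully.
import re

# Small illustrative subset of bidirectional gendered term pairs (assumed, not the paper's list).
GENDER_PAIRS = [
    ("he", "she"), ("him", "her"), ("his", "her"),
    ("man", "woman"), ("men", "women"),
    ("father", "mother"), ("son", "daughter"),
]
SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP.setdefault(a, b)
    SWAP.setdefault(b, a)

def counterfactual(sentence: str) -> str:
    """Return a copy of `sentence` with gendered terms swapped."""
    def swap_token(match: re.Match) -> str:
        token = match.group(0)
        repl = SWAP.get(token.lower())
        if repl is None:
            return token
        # Preserve the capitalization of the original token.
        return repl.capitalize() if token[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", swap_token, sentence)

def augment(corpus):
    """Two-sided CDA: keep each original sentence and add its counterfactual."""
    for sent in corpus:
        yield sent
        yield counterfactual(sent)

if __name__ == "__main__":
    demo = ["He is a doctor and his father is proud of him."]
    for s in augment(demo):
        print(s)
```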
Anthology ID:
2023.acl-long.876
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15730–15745
URL:
https://aclanthology.org/2023.acl-long.876
DOI:
10.18653/v1/2023.acl-long.876
Cite (ACL):
Zhongbin Xie and Thomas Lukasiewicz. 2023. An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15730–15745, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models (Xie & Lukasiewicz, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.876.pdf
Video:
https://aclanthology.org/2023.acl-long.876.mp4