Characterizing Stereotypical Bias from Privacy-preserving Pre-Training

Stefan Arnold, Rene Gröbner, Annika Schreiner


Abstract
Differential Privacy (DP) can be applied to raw text by exploiting the spatial arrangement of words in an embedding space. We investigate the implications of such text privatization on Language Models (LMs) and their tendency towards stereotypical associations. Since previous studies have documented that linguistic proficiency correlates with stereotypical bias, one could assume that techniques for text privatization, which are known to degrade language modeling capabilities, would cancel out undesirable biases. By testing BERT models trained on texts containing biased statements privatized with varying degrees of privacy, our study reveals that although stereotypical bias generally diminishes as privacy is tightened, text privatization does not uniformly reduce bias across all social domains. This highlights the need for careful diagnosis of bias in LMs that undergo text privatization.
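The privatization referenced in the abstract is typically a metric-DP word substitution: each word embedding is perturbed with noise calibrated to a privacy budget epsilon, and the noisy vector is mapped back to the nearest vocabulary word. Below is a minimal sketch of this idea, assuming a multivariate Laplace-style mechanism (a uniformly random direction scaled by a Gamma-distributed radius); the toy vocabulary and random embeddings are illustrative placeholders, not the models or data used in the paper.

```python
# Minimal sketch of word-level text privatization under metric differential
# privacy. Assumptions: a multivariate Laplace-style noise mechanism and a
# toy vocabulary with random embeddings standing in for real word vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and embeddings (illustrative only).
vocab = ["nurse", "doctor", "engineer", "teacher", "artist"]
dim = 50
embeddings = rng.standard_normal((len(vocab), dim))

def sample_noise(dim: int, epsilon: float) -> np.ndarray:
    """Draw noise for the multivariate Laplace mechanism: a uniformly
    random direction scaled by a Gamma(dim, 1/epsilon) radius."""
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return radius * direction

def privatize(word: str, epsilon: float) -> str:
    """Perturb a word's embedding and map the noisy vector back to the
    nearest word in the vocabulary."""
    vec = embeddings[vocab.index(word)]
    noisy = vec + sample_noise(dim, epsilon)
    distances = np.linalg.norm(embeddings - noisy, axis=1)
    return vocab[int(np.argmin(distances))]

# Smaller epsilon -> more noise -> stronger privacy, weaker utility.
for eps in (50.0, 10.0, 1.0):
    print(eps, privatize("nurse", eps))
```

At generous budgets the mechanism usually returns the original word; as epsilon shrinks, substitutions become more frequent and semantically more distant. This is the utility degradation that the paper relates to changes in measured stereotypical bias.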
Anthology ID: 2024.privatenlp-1.3
Volume: Proceedings of the Fifth Workshop on Privacy in Natural Language Processing
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Ivan Habernal, Sepideh Ghanavati, Abhilasha Ravichander, Vijayanta Jain, Patricia Thaine, Timour Igamberdiev, Niloofar Mireshghallah, Oluwaseyi Feyisetan
Venues: PrivateNLP | WS
Publisher: Association for Computational Linguistics
Pages: 20–28
URL: https://aclanthology.org/2024.privatenlp-1.3
Cite (ACL): Stefan Arnold, Rene Gröbner, and Annika Schreiner. 2024. Characterizing Stereotypical Bias from Privacy-preserving Pre-Training. In Proceedings of the Fifth Workshop on Privacy in Natural Language Processing, pages 20–28, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Characterizing Stereotypical Bias from Privacy-preserving Pre-Training (Arnold et al., PrivateNLP-WS 2024)
PDF: https://aclanthology.org/2024.privatenlp-1.3.pdf