Local Contrastive Editing of Gender Stereotypes

Marlene Lutz, Rochelle Choenni, Markus Strohmaier, Anne Lauscher


Abstract
Stereotypical bias encoded in language models (LMs) poses a threat to safe language technology, yet our understanding of how bias manifests in the parameters of LMs remains incomplete. We introduce local contrastive editing that enables the localization and editing of a subset of weights in a target model in relation to a reference model. We deploy this approach to identify and modify subsets of weights that are associated with gender stereotypes in LMs. Through a series of experiments we demonstrate that local contrastive editing can precisely localize and control a small subset (< 0.5%) of weights that encode gender bias. Our work (i) advances our understanding of how stereotypical biases can manifest in the parameter space of LMs and (ii) opens up new avenues for developing parameter-efficient strategies for controlling model properties in a contrastive manner.
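The abstract describes localizing and overwriting a small subset (< 0.5%) of a target model's weights based on their divergence from a reference model. The sketch below is a minimal, hypothetical illustration of that idea on raw NumPy arrays, not the paper's actual procedure: the function name, the magnitude-of-difference selection criterion, and the toy data are all assumptions for illustration.

```python
import numpy as np

def local_contrastive_edit(target, reference, fraction=0.005):
    """Illustrative sketch: overwrite the `fraction` of `target` weights
    that diverge most from `reference`; leave all other weights untouched."""
    diff = np.abs(target - reference).ravel()
    k = max(1, int(fraction * diff.size))  # e.g. 0.5% of the weights
    idx = np.argpartition(diff, -k)[-k:]   # positions of the k largest diffs
    edited = target.copy().ravel()
    edited[idx] = reference.ravel()[idx]   # splice in the reference values
    return edited.reshape(target.shape), idx

# Toy demo on a random weight matrix (stand-in for one LM parameter tensor)
rng = np.random.default_rng(0)
target = rng.normal(size=(200, 200))
reference = target + rng.normal(scale=0.01, size=(200, 200))
edited, idx = local_contrastive_edit(target, reference, fraction=0.005)
print(f"edited {idx.size}/{target.size} weights ({idx.size / target.size:.2%})")
```

In a real setting the two arrays would be matching tensors from a target and a reference checkpoint (e.g. a biased and a debiased model), and the selection criterion would follow the paper's localization method rather than this simple magnitude heuristic.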
Anthology ID: 2024.emnlp-main.1197
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 21474–21493
URL: https://aclanthology.org/2024.emnlp-main.1197/
DOI: 10.18653/v1/2024.emnlp-main.1197
Cite (ACL): Marlene Lutz, Rochelle Choenni, Markus Strohmaier, and Anne Lauscher. 2024. Local Contrastive Editing of Gender Stereotypes. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21474–21493, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Local Contrastive Editing of Gender Stereotypes (Lutz et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.1197.pdf