2024
Contrastive Learning as a Polarizer: Mitigating Gender Bias by Fair and Biased sentences
Kyungmin Park | Sihyun Oh | Daehyun Kim | Juae Kim
Findings of the Association for Computational Linguistics: NAACL 2024
Recently, language models have driven rapid progress in natural language processing. However, recent studies have highlighted a significant issue: social biases inherent in training data can be learned and propagated by models. In this study, we propose a contrastive learning method for bias mitigation that uses anchor points to push negatives further away and pull positives closer within the representation space. This approach employs stereotypical data as negatives and stereotype-free data as positives, enhancing debiasing performance. Our model attained state-of-the-art performance on the ICAT score of StereoSet, a benchmark for measuring bias in language models. In addition, we observed that effective debiasing is achieved through an awareness of biases, as evidenced by improved hate speech detection scores. The implementation code and trained models are available at https://github.com/HUFS-NLP/CL_Polarizer.git.
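The core idea maps naturally onto an InfoNCE-style contrastive objective. Below is a minimal sketch of such a loss, not the authors' released implementation (see the repository above), assuming precomputed sentence embeddings in which the positive is a stereotype-free sentence and the negatives are stereotypical sentences; the function name polarizer_contrastive_loss and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def polarizer_contrastive_loss(anchor, positive, negatives, temperature=0.05):
    """InfoNCE-style loss: pull the stereotype-free positive toward the
    anchor and push stereotypical negatives away in the representation space.

    anchor:    (batch, dim)    anchor sentence embeddings
    positive:  (batch, dim)    stereotype-free sentence embeddings
    negatives: (batch, k, dim) stereotypical sentence embeddings
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarity of each anchor with its positive: (batch, 1)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)
    # Cosine similarity of each anchor with its k negatives: (batch, k)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)

    # Column 0 holds the positive; the target class is therefore 0.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    b, k, d = 4, 8, 768
    loss = polarizer_contrastive_loss(
        torch.randn(b, d), torch.randn(b, d), torch.randn(b, k, d)
    )
    print(f"contrastive loss: {loss.item():.4f}")
```

The temperature controls the sharpness of the softmax: smaller values weight hard negatives more heavily, strengthening the push away from stereotypical sentences.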