On Evaluating and Mitigating Gender Biases in Multilingual Settings

Aniket Vashishtha, Kabir Ahuja, Sunayana Sitaram


Abstract
While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior work has primarily been limited to English. In this work, we investigate some of the challenges of evaluating and mitigating biases in multilingual settings, which stem from the lack of existing benchmarks and resources for bias evaluation beyond English, especially for non-Western contexts. We first create a benchmark for evaluating gender biases in pre-trained masked language models by extending DisCo to different Indian languages using human annotations. We then extend various debiasing methods to work beyond English and evaluate their effectiveness for state-of-the-art massively multilingual models on our proposed metric. Overall, our work highlights the challenges that arise while studying social biases in multilingual settings and provides resources as well as mitigation techniques to take a step toward scaling to more languages.
Anthology ID:
2023.findings-acl.21
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
307–318
URL:
https://aclanthology.org/2023.findings-acl.21
DOI:
10.18653/v1/2023.findings-acl.21
Cite (ACL):
Aniket Vashishtha, Kabir Ahuja, and Sunayana Sitaram. 2023. On Evaluating and Mitigating Gender Biases in Multilingual Settings. In Findings of the Association for Computational Linguistics: ACL 2023, pages 307–318, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
On Evaluating and Mitigating Gender Biases in Multilingual Settings (Vashishtha et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.21.pdf