Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples

Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki


Abstract
Numerous types of social biases have been identified in pre-trained language models (PLMs), and various intrinsic bias evaluation measures have been proposed for quantifying those social biases. Prior work has relied on human-annotated examples to compare existing intrinsic bias evaluation measures. However, this approach is not easily adaptable to different languages, nor is it amenable to large-scale evaluations, due to the cost and difficulty of recruiting human annotators. To overcome this limitation, we propose a method to compare intrinsic gender bias evaluation measures without relying on human-annotated examples. Specifically, we create multiple bias-controlled versions of PLMs using varying amounts of male vs. female gendered sentences, mined automatically from an unannotated corpus using gender-related word lists. Next, each bias-controlled PLM is evaluated with an intrinsic bias evaluation measure, and the rank correlation between the computed bias scores and the gender proportions used to fine-tune the PLMs is measured. Experiments on multiple corpora and PLMs consistently show that the correlations reported by our proposed method, which does not require human-annotated examples, are comparable to those computed in prior work using human-annotated examples.
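The protocol described in the abstract reduces to a rank-correlation test across bias-controlled models. The Python sketch below illustrates the idea under stated assumptions: the gender word lists are toy placeholders, and the fine-tuning proportions and bias scores are invented values standing in for the outputs of the actual fine-tuning and evaluation steps, which the sketch omits; only the Spearman correlation step mirrors the protocol directly, and none of this is the authors' implementation.

```python
from scipy.stats import spearmanr

# Toy gender-related word lists (placeholders; the paper mines gendered
# sentences from an unannotated corpus using its own lists).
MALE_WORDS = {"he", "him", "his", "man", "men", "father", "son"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "mother", "daughter"}

def gender_label(sentence):
    """Label a sentence 'male' or 'female' if it matches exactly one of
    the two lists; return None for neutral or ambiguous sentences."""
    tokens = set(sentence.lower().split())
    is_male = bool(tokens & MALE_WORDS)
    is_female = bool(tokens & FEMALE_WORDS)
    if is_male != is_female:  # exactly one list matched
        return "male" if is_male else "female"
    return None

# Hypothetical outcome of the protocol: each bias-controlled PLM is
# fine-tuned with a different proportion of female-gendered sentences
# and then scored by one intrinsic bias evaluation measure.
female_proportions = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
bias_scores = [0.71, 0.64, 0.52, 0.47, 0.40, 0.33]  # illustrative values only

# A measure that tracks the injected gender skew should rank the models
# in the same (or exactly reversed) order as the fine-tuning proportions.
rho, p_value = spearmanr(female_proportions, bias_scores)
print(f"Spearman rank correlation: {rho:.3f} (p = {p_value:.3f})")
```

Under this reading, a measure that faithfully reflects the controlled bias should yield |ρ| close to 1, while weak or unstable correlations would flag measures that fail to track the injected gender skew.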
Anthology ID:
2023.eacl-main.209
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2857–2863
URL:
https://aclanthology.org/2023.eacl-main.209
DOI:
10.18653/v1/2023.eacl-main.209
Cite (ACL):
Masahiro Kaneko, Danushka Bollegala, and Naoaki Okazaki. 2023. Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2857–2863, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples (Kaneko et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.209.pdf
Video:
https://aclanthology.org/2023.eacl-main.209.mp4