%0 Conference Proceedings
%T Choose Your Lenses: Flaws in Gender Bias Evaluation
%A Orgad, Hadas
%A Belinkov, Yonatan
%Y Hardmeier, Christian
%Y Basta, Christine
%Y Costa-jussà, Marta R.
%Y Stanovsky, Gabriel
%Y Gonen, Hila
%S Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, Washington
%F orgad-belinkov-2022-choose
%X Considerable efforts to measure and mitigate gender bias in recent years have led to the introduction of an abundance of tasks, datasets, and metrics used in this vein. In this position paper, we assess the current paradigm of gender bias evaluation and identify several flaws in it. First, we highlight the importance of extrinsic bias metrics that measure how a model’s performance on some task is affected by gender, as opposed to intrinsic evaluations of model representations, which are less strongly connected to specific harms to people interacting with systems. We find that only a few extrinsic metrics are measured in most studies, although more can be measured. Second, we find that datasets and metrics are often coupled, and discuss how their coupling hinders the ability to obtain reliable conclusions, and how one may decouple them. We then investigate how the choice of the dataset and its composition, as well as the choice of the metric, affect bias measurement, finding significant variations across each of them. Finally, we propose several guidelines for more reliable gender bias evaluation.
%R 10.18653/v1/2022.gebnlp-1.17
%U https://aclanthology.org/2022.gebnlp-1.17
%U https://doi.org/10.18653/v1/2022.gebnlp-1.17
%P 151-167