Bipol: Multi-Axes Evaluation of Bias with Explainability in Benchmark Datasets

Tosin Adewumi, Isabella Södergren, Lama Alkhaled, Sana Al-azzawi, Foteini Simistira Liwicki, Marcus Liwicki


Abstract
We investigate five English NLP benchmark datasets (from the SuperGLUE leaderboard) and two Swedish datasets for bias along multiple axes. The datasets are: Boolean Questions (BoolQ), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AX-g), Recognizing Textual Entailment (RTE), Swedish CB, and SWEDN. Bias can be harmful, and it is known to be common in the data from which ML models learn. To mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is still uncommon; hence, we also contribute a new, large, bias-labelled Swedish dataset (of 2 million samples), translated from the English version, and train the state-of-the-art (SotA) mT5 model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the code, model, and new dataset publicly available.
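To make the abstract's two ideas concrete (a per-axis bias estimate plus term counts that serve as the explanation), below is a minimal Python sketch of a lexicon-based, multi-axes bias score in the spirit of bipol's explainable lexicon stage. The mini-lexica, the axis_score aggregation, and all names here are illustrative assumptions for demonstration only, not the paper's published formula; the paper itself contributes the actual metric and full English and Swedish lexica.

import re
from collections import Counter

# Hypothetical mini-lexica; the paper contributes much larger multi-axes lexica.
AXES = {
    "gender": {"female": ["she", "her", "woman"], "male": ["he", "his", "man"]},
    "racial": {"group_a": ["black"], "group_b": ["white"]},
}

def axis_score(tokens, lexicon):
    # Count sensitive terms per group, then measure the imbalance between the
    # most- and least-mentioned groups, normalised by the axis total.
    counts = Counter({g: sum(tokens.count(t) for t in terms)
                      for g, terms in lexicon.items()})
    total = sum(counts.values())
    if total == 0:
        return 0.0, counts
    return (max(counts.values()) - min(counts.values())) / total, counts

def sentence_bias(text):
    # Average the per-axis imbalance; keep the raw term counts as the explanation.
    tokens = re.findall(r"[a-z']+", text.lower())
    scores, explanations = [], {}
    for axis, lexicon in AXES.items():
        score, counts = axis_score(tokens, lexicon)
        scores.append(score)
        explanations[axis] = dict(counts)
    return sum(scores) / len(scores), explanations

score, why = sentence_bias("He said his manager praised him, and she agreed.")
print(round(score, 3), why)
# 0.167 {'gender': {'female': 1, 'male': 2}, 'racial': {'group_a': 0, 'group_b': 0}}

Exposing the per-group term counts alongside the score is what makes such a metric explainable: one can see which axis, and which terms, drove the estimate.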
Anthology ID:
2023.ranlp-1.1
Volume:
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1–10
URL:
https://aclanthology.org/2023.ranlp-1.1
Cite (ACL):
Tosin Adewumi, Isabella Södergren, Lama Alkhaled, Sana Al-azzawi, Foteini Simistira Liwicki, and Marcus Liwicki. 2023. Bipol: Multi-Axes Evaluation of Bias with Explainability in Benchmark Datasets. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 1–10, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Bipol: Multi-Axes Evaluation of Bias with Explainability in Benchmark Datasets (Adewumi et al., RANLP 2023)
PDF:
https://aclanthology.org/2023.ranlp-1.1.pdf