Systematic Evaluation of Predictive Fairness

Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann


Abstract
Mitigating bias when training on biased datasets is an important open problem. Several techniques have been proposed; however, the typical evaluation regime is very limited, considering only a narrow range of data conditions. For instance, the effects of target class imbalance and stereotyping are under-studied. To address this gap, we examine the performance of various debiasing methods across multiple tasks, spanning binary classification (Twitter sentiment), multi-class classification (profession prediction), and regression (valence prediction). Through extensive experimentation, we find that data conditions have a strong influence on relative model performance, and that general conclusions cannot be drawn about method efficacy when evaluating only on standard datasets, as is current practice in fairness research.
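The abstract's central variables, the degree of stereotyping (correlation between target class and protected group) and a gap-style fairness metric, can be illustrated with a small simulation. The sketch below is not the authors' code: the data generator, the logistic-regression probe, the feature design, and the TPR-gap metric are all illustrative assumptions, chosen only to show how fairness outcomes can shift with data conditions.
```python
# Minimal sketch (not the paper's code): varying a "data condition" such as
# stereotyping and measuring a gap-style fairness metric.
# All names (make_biased_dataset, tpr_gap, feature design) are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_biased_dataset(n, stereotype=0.8, class_balance=0.5, seed=0):
    """Binary labels y and protected attribute g with P(g=1 | y=1) =
    stereotype, so stereotyping and class imbalance vary independently."""
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < class_balance).astype(int)
    g = (rng.random(n) < np.where(y == 1, stereotype, 1 - stereotype)).astype(int)
    # Features: a noisy view of the label plus a clean proxy of the group,
    # so a classifier can exploit group membership as a shortcut.
    x = np.column_stack([
        y + rng.normal(0.0, 2.0, n),   # weak task signal
        g + rng.normal(0.0, 0.1, n),   # strong group proxy
    ])
    return x, y, g

def tpr_gap(y_true, y_pred, g):
    """Equal-opportunity-style gap: |TPR(group 0) - TPR(group 1)|."""
    tpr = [y_pred[(g == v) & (y_true == 1)].mean() for v in (0, 1)]
    return abs(tpr[0] - tpr[1])

if __name__ == "__main__":
    for s in (0.5, 0.7, 0.9):  # no, moderate, strong stereotyping
        x, y, g = make_biased_dataset(20_000, stereotype=s)
        y_pred = LogisticRegression().fit(x, y).predict(x)
        print(f"stereotype={s:.1f}  TPR gap={tpr_gap(y, y_pred, g):.3f}")
```
As the stereotyping level grows, the classifier leans harder on the group proxy and the TPR gap widens. This is the kind of interaction between data conditions and fairness outcomes that the paper evaluates systematically across real tasks and debiasing methods.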
Anthology ID:
2022.aacl-main.6
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
68–81
URL:
https://aclanthology.org/2022.aacl-main.6
Cite (ACL):
Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2022. Systematic Evaluation of Predictive Fairness. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 68–81, Online only. Association for Computational Linguistics.
Cite (Informal):
Systematic Evaluation of Predictive Fairness (Han et al., AACL-IJCNLP 2022)
PDF:
https://aclanthology.org/2022.aacl-main.6.pdf