Fairness-aware Class Imbalanced Learning

Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann


Abstract
Class imbalance is a common challenge in many NLP tasks, and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups. However, there has traditionally been a disconnect between research on class-imbalanced learning and on bias mitigation, and only recently have the two been examined through a common lens. In this work we evaluate long-tail learning methods for tweet sentiment and occupation classification, and extend a margin-loss based approach with methods to enforce fairness. We empirically show through controlled experiments that the proposed approaches help mitigate both class imbalance and demographic biases.
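The abstract mentions extending "a margin-loss based approach" for long-tail learning. Assuming a label-distribution-aware margin (LDAM) style formulation, where rarer classes receive larger margins proportional to n_j^(-1/4), a minimal NumPy sketch might look as follows; the function names and the `max_margin` scaling are illustrative, not the authors' implementation:

```python
import numpy as np

def ldam_margins(class_counts, max_margin=0.5):
    """Per-class margins m_j proportional to n_j^(-1/4), rescaled so the
    largest margin (i.e. for the rarest class) equals max_margin."""
    m = 1.0 / np.power(np.asarray(class_counts, dtype=float), 0.25)
    return m * (max_margin / m.max())

def margin_cross_entropy(logits, labels, margins):
    """Cross-entropy after subtracting each example's true-class margin
    from its true-class logit, which enlarges the decision margin for
    rare classes during training."""
    adj = np.array(logits, dtype=float, copy=True)
    rows = np.arange(len(labels))
    adj[rows, labels] -= margins[labels]
    adj -= adj.max(axis=1, keepdims=True)  # numerical stability
    log_probs = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

Under this sketch, a class with 10 training examples receives a larger margin than one with 1000, so the loss penalizes narrow decision boundaries around minority classes more heavily than standard cross-entropy would.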
Anthology ID:
2021.emnlp-main.155
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2045–2051
URL:
https://aclanthology.org/2021.emnlp-main.155
DOI:
10.18653/v1/2021.emnlp-main.155
Bibkey:
Cite (ACL):
Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, and Lea Frermann. 2021. Fairness-aware Class Imbalanced Learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2045–2051, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Fairness-aware Class Imbalanced Learning (Subramanian et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.155.pdf
Video:
https://aclanthology.org/2021.emnlp-main.155.mp4