Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification

Xiaolei Huang


Abstract
Existing approaches to mitigating demographic biases are evaluated on monolingual data; multilingual data has not been examined. In this work, we treat gender as domains (e.g., male vs. female) and present a standard domain adaptation model to reduce gender bias and improve the performance of text classifiers in multilingual settings. We evaluate our approach on two text classification tasks, hate speech detection and rating prediction, and demonstrate its effectiveness against three fairness-aware baselines.
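The abstract's idea of treating gender as domains can be illustrated with the classic "frustratingly easy" feature-augmentation recipe for domain adaptation: each example keeps a shared copy of its features plus a domain-specific copy, so the classifier learns both gender-general and gender-specific weights. The sketch below is an illustrative assumption, not the paper's exact model; the function name `augment_features` and the domain labels are hypothetical.

```python
import numpy as np

def augment_features(X, domains, domain_list=("male", "female")):
    """Feature augmentation for domain adaptation.

    Maps each d-dimensional example to d * (k + 1) dimensions:
    one shared (domain-general) copy of the features, followed by
    k domain-specific slots, only one of which is filled.
    """
    n, d = X.shape
    k = len(domain_list)
    out = np.zeros((n, d * (k + 1)))
    out[:, :d] = X  # shared copy, active for every example
    for i, dom in enumerate(domains):
        j = domain_list.index(dom)
        # domain-specific copy: zeros everywhere except this domain's slot
        out[i, d * (j + 1): d * (j + 2)] = X[i]
    return out
```

The augmented matrix can then be fed to any standard classifier; at test time the same mapping is applied using the example's (possibly inferred) gender domain.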
Anthology ID:
2022.naacl-main.52
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
717–723
URL:
https://aclanthology.org/2022.naacl-main.52
DOI:
10.18653/v1/2022.naacl-main.52
Bibkey:
Cite (ACL):
Xiaolei Huang. 2022. Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 717–723, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification (Huang, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.52.pdf
Software:
 2022.naacl-main.52.software.zip
Video:
 https://aclanthology.org/2022.naacl-main.52.mp4
Code:
 xiaoleihuang/domainfairness