Examining Covert Gender Bias: A Case Study in Turkish and English Machine Translation Models

Chloe Ciora, Nur Iren, Malihe Alikhani


Abstract
As Machine Translation (MT) has become increasingly powerful, accessible, and widespread, the potential for the perpetuation of bias has grown alongside its advances. While overt indicators of bias have been studied in machine translation, we argue that covert biases expose a problem that is further entrenched. Through the use of the gender-neutral language Turkish and the gendered language English, we examine cases of both overt and covert gender bias in MT models. Specifically, we introduce a method to investigate asymmetrical gender markings. We also assess bias in the attribution of personhood and examine occupational and personality stereotypes through overt bias indicators in MT models. Our work explores a deeper layer of bias in MT models and demonstrates the continued need for language-specific, interdisciplinary methodology in MT model development.
Anthology ID:
2021.inlg-1.7
Volume:
Proceedings of the 14th International Conference on Natural Language Generation
Month:
August
Year:
2021
Address:
Aberdeen, Scotland, UK
Editors:
Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
55–63
URL:
https://aclanthology.org/2021.inlg-1.7
DOI:
10.18653/v1/2021.inlg-1.7
Cite (ACL):
Chloe Ciora, Nur Iren, and Malihe Alikhani. 2021. Examining Covert Gender Bias: A Case Study in Turkish and English Machine Translation Models. In Proceedings of the 14th International Conference on Natural Language Generation, pages 55–63, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal):
Examining Covert Gender Bias: A Case Study in Turkish and English Machine Translation Models (Ciora et al., INLG 2021)
PDF:
https://aclanthology.org/2021.inlg-1.7.pdf
Code:
NurIren/Gender-Bias-in-TR-to-EN-MT-Models