Evaluating Gender Bias in Machine Translation

Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer


Abstract
We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., “The doctor asked the nurse to help her in the operation”). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.
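The protocol the abstract describes can be sketched in a few lines: translate a sentence whose pronoun fixes an entity's gender, then check the grammatical gender of that entity's translation via morphological analysis. The sketch below is illustrative only, with an invented toy Spanish lexicon and pre-computed "MT output" standing in for a real translation system and morphological analyzer; none of the names below come from the paper's released code.

```python
# Illustrative sketch of the gender-bias evaluation protocol (not the paper's code).
# A toy lexicon and canned "translations" stand in for a real MT system
# and a real morphological analyzer.

# Each example: (English source sentence, target entity, gold gender from the pronoun).
EXAMPLES = [
    ("The doctor asked the nurse to help her in the operation", "doctor", "female"),
    ("The doctor asked the nurse to help him in the operation", "doctor", "male"),
]

# Toy "MT output" keyed by source sentence (stand-in for querying a real system).
TRANSLATIONS = {
    EXAMPLES[0][0]: "El doctor le pidió a la enfermera que la ayudara en la operación",
    EXAMPLES[1][0]: "El doctor le pidió a la enfermera que lo ayudara en la operación",
}

# Toy morphological lexicon: Spanish surface form -> grammatical gender.
GENDER_LEXICON = {"doctor": "male", "doctora": "female",
                  "enfermero": "male", "enfermera": "female"}

def predicted_gender(translation):
    """Return the grammatical gender of the first profession word found, else None."""
    for token in translation.lower().split():
        token = token.strip(".,")
        if token in GENDER_LEXICON:
            return GENDER_LEXICON[token]
    return None

def gender_accuracy(examples, translations):
    """Fraction of examples where the translated entity's gender matches the gold gender."""
    correct = sum(predicted_gender(translations[src]) == gold
                  for src, _entity, gold in examples)
    return correct / len(examples)

if __name__ == "__main__":
    # The toy MT output renders "doctor" as masculine in both sentences, so the
    # female-referent example counts as a gender-biased error.
    print(f"gender accuracy: {gender_accuracy(EXAMPLES, TRANSLATIONS):.2f}")  # → 0.50
```

In the paper's actual setup, the source entity is aligned to its translation automatically and a morphological analyzer (rather than a hand-built lexicon) recovers the target-side gender, which is what lets the method scale to eight target languages.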
Anthology ID: P19-1164
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1679–1684
URL: https://aclanthology.org/P19-1164
DOI: 10.18653/v1/P19-1164
Cite (ACL): Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating Gender Bias in Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Evaluating Gender Bias in Machine Translation (Stanovsky et al., ACL 2019)
PDF: https://aclanthology.org/P19-1164.pdf
Video: https://vimeo.com/384485671
Data: WinoBias