Does Context Help Mitigate Gender Bias in Neural Machine Translation?

Harritxu Gete, Thierry Etchegoyhen


Abstract
Neural Machine Translation models tend to perpetuate gender bias present in their training data distribution. Context-aware models have been previously suggested as a means to mitigate this type of bias. In this work, we examine this claim by analysing in detail the translation of stereotypical professions from English to German, and translation with non-informative context from Basque to Spanish. Our results show that, although context-aware models can significantly enhance translation accuracy for feminine terms, they can still maintain or even amplify gender bias. These results highlight the need for more fine-grained approaches to bias mitigation in Neural Machine Translation.
Anthology ID: 2024.findings-emnlp.868
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14788–14794
URL: https://aclanthology.org/2024.findings-emnlp.868
Cite (ACL):
Harritxu Gete and Thierry Etchegoyhen. 2024. Does Context Help Mitigate Gender Bias in Neural Machine Translation?. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14788–14794, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Does Context Help Mitigate Gender Bias in Neural Machine Translation? (Gete & Etchegoyhen, Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.868.pdf