MiTTenS: A Dataset for Evaluating Gender Mistranslation

Kevin Robinson, Sneha Kudugunta, Romina Stella, Sunipa Dev, Jasmijn Bastings
Abstract
Translation systems, including foundation models capable of translation, can produce errors that result in gender mistranslation, and such errors can be especially harmful. To measure the extent of such potential harms when translating into and out of English, we introduce a dataset, MiTTenS, covering 26 languages from a variety of language families and scripts, including several traditionally under-represented in digital resources. The dataset is constructed with handcrafted passages that target known failure patterns, longer synthetically generated passages, and natural passages sourced from multiple domains. We demonstrate the usefulness of the dataset by evaluating both neural machine translation systems and foundation models, and show that all systems exhibit gender mistranslation and potential harm, even in high resource languages.
Anthology ID:
2024.emnlp-main.238
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4115–4124
URL:
https://aclanthology.org/2024.emnlp-main.238/
DOI:
10.18653/v1/2024.emnlp-main.238
Cite (ACL):
Kevin Robinson, Sneha Kudugunta, Romina Stella, Sunipa Dev, and Jasmijn Bastings. 2024. MiTTenS: A Dataset for Evaluating Gender Mistranslation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4115–4124, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
MiTTenS: A Dataset for Evaluating Gender Mistranslation (Robinson et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.238.pdf
Data:
2024.emnlp-main.238.data.zip