CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine Translation

Md Mahfuz Ibn Alam, Sina Ahmadi, Antonios Anastasopoulos


Abstract
Neural machine translation (NMT) systems exhibit limited robustness in handling source-side linguistic variations. Their performance tends to degrade when faced with even slight deviations in language usage, such as different domains or variations introduced by second-language speakers. It is intuitive to extend this observation to encompass dialectal variations as well, but the work allowing the community to evaluate MT systems on this dimension is limited. To alleviate this issue, we compile and release CODET, a contrastive dialectal benchmark encompassing 891 different variations from twelve languages. We also quantitatively demonstrate the challenges large MT models face in effectively translating dialectal variants. All the data and code have been released.
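
As an illustrative aside, the "contrastive dialectal evaluation" described in the abstract can be sketched as: translate the same content rendered in several dialectal variants and compare a surface metric against a shared reference, reporting the gap between variants. The sketch below is a minimal assumption-laden illustration, not the released CODET code; the translate() stub, dialect labels, and placeholder sentences are hypothetical.

# Minimal sketch of contrastive dialectal evaluation (illustrative only;
# not the released CODET tooling). Uses sacrebleu's chrF metric.
from sacrebleu.metrics import CHRF

def translate(sentences, src_lang):
    # Placeholder: plug in any MT system here (API call or local model).
    # Returning the input unchanged keeps the sketch runnable end to end.
    return sentences

# The same sentences rendered in different dialectal variants of one language,
# all sharing a single set of target-side references.
dialect_sources = {
    "variant_a": ["..."],   # e.g., the standard variety
    "variant_b": ["..."],   # e.g., a regional variant of the same sentences
}
references = ["..."]        # shared references for the target language

chrf = CHRF()
scores = {}
for dialect, sources in dialect_sources.items():
    hypotheses = translate(sources, src_lang=dialect)
    scores[dialect] = chrf.corpus_score(hypotheses, [references]).score

# The contrastive signal is the score gap between variants of the same content.
gap = max(scores.values()) - min(scores.values())
print(scores, f"gap={gap:.2f}")

A large gap between variants of the same source content indicates that the system handles some dialects markedly worse than others, which is the kind of degradation the benchmark is designed to expose.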
Anthology ID: 2024.findings-eacl.125
Volume: Findings of the Association for Computational Linguistics: EACL 2024
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Yvette Graham, Matthew Purver
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1790–1859
URL: https://aclanthology.org/2024.findings-eacl.125
Cite (ACL): Md Mahfuz Ibn Alam, Sina Ahmadi, and Antonios Anastasopoulos. 2024. CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine Translation. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1790–1859, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine Translation (Alam et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-eacl.125.pdf