Machine Translation Evaluation Benchmark for Wu Chinese: Workflow and Analysis

Hongjian Yu, Yiming Shi, Zherui Zhou, Christopher Haberland


Abstract
We introduce a FLORES+ dataset as an evaluation benchmark for modern Wu Chinese machine translation models and showcase its compatibility with existing Wu data. Wu Chinese is mutually unintelligible with other Sinitic languages such as Mandarin and Yue (Cantonese), but uses a set of Hanzi (Chinese characters) that largely overlaps with theirs. Wu has the second-largest speaker population among languages in China, but it has been suffering a significant drop in usage, especially among younger generations. We identify Wu Chinese as a textually low-resource language and address challenges for its machine translation models. Our contributions include: (1) an open-source, manually translated dataset; (2) full documentation of the dataset creation process and validation experiments; (3) preliminary tools for Wu Chinese normalization and segmentation; and (4) benefits and limitations of our dataset, as well as implications for other low-resource languages.
Anthology ID:
2024.wmt-1.47
Volume:
Proceedings of the Ninth Conference on Machine Translation
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue:
WMT
Publisher:
Association for Computational Linguistics
Pages:
600–605
URL:
https://aclanthology.org/2024.wmt-1.47
Cite (ACL):
Hongjian Yu, Yiming Shi, Zherui Zhou, and Christopher Haberland. 2024. Machine Translation Evaluation Benchmark for Wu Chinese: Workflow and Analysis. In Proceedings of the Ninth Conference on Machine Translation, pages 600–605, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Machine Translation Evaluation Benchmark for Wu Chinese: Workflow and Analysis (Yu et al., WMT 2024)
PDF:
https://aclanthology.org/2024.wmt-1.47.pdf