Yiming Shi
2024
Machine Translation Evaluation Benchmark for Wu Chinese: Workflow and Analysis
Hongjian Yu | Yiming Shi | Zherui Zhou | Christopher Haberland
Proceedings of the Ninth Conference on Machine Translation
We introduce a FLORES+ dataset as an evaluation benchmark for modern Wu Chinese machine translation models and showcase its compatibility with existing Wu data. Wu Chinese is mutually unintelligible with other Sinitic languages such as Mandarin and Yue (Cantonese), but it is written with a set of Hanzi (Chinese characters) that overlaps substantially with those of other Sinitic languages. Wu has the second-largest speaker population among languages in China, yet its usage has declined significantly, especially among younger generations. We identify Wu Chinese as a textually low-resource language and address challenges for its machine translation models. Our contributions include: (1) an open-source, manually translated dataset; (2) full documentation of the dataset creation process and validation experiments; (3) preliminary tools for Wu Chinese normalization and segmentation; and (4) a discussion of the benefits and limitations of our dataset, with implications for other low-resource languages.
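A benchmark of this kind is used by scoring system output against the manually translated references. Below is a minimal sketch of such an evaluation with the sacrebleu toolkit; the file paths are hypothetical stand-ins, and the metric choices are illustrative rather than prescribed by the paper (character-level scoring is shown only because Hanzi text lacks canonical word boundaries).

```python
# Minimal sketch: scoring a Wu Chinese MT system against a FLORES+-style
# reference set with sacrebleu. File names below are hypothetical;
# substitute the actual paths from the released dataset.
import sacrebleu

def load_lines(path):
    """Read one sentence per line, as in FLORES+ devtest files."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

hypotheses = load_lines("system_output.wuu")   # hypothetical system output
references = load_lines("devtest.wuu")         # hypothetical Wu references

# chrF works on character n-grams, a natural fit for unsegmented Hanzi text;
# BLEU is shown with sacrebleu's character tokenizer for the same reason.
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="char")
print(chrf)
print(bleu)
```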
2023
Chat Disentanglement: Data for New Domains and Methods for More Accurate Annotation
Sai R. Gouravajhala | Andrew M. Vernier | Yiming Shi | Zihan Li | Mark S. Ackerman | Jonathan K. Kummerfeld
Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association
Conversation disentanglement is the task of taking a log of intertwined conversations from a shared channel and breaking it into individual conversations. The standard datasets for disentanglement cover a single domain and were annotated by linguistics experts carefully trained for the task. In this paper, we introduce the first multi-domain dataset and a study of annotation by people without linguistics expertise or extensive training. We experiment with several interface variations, conducting user studies with domain experts and crowd workers. We also test a hypothesis from prior work that link-based annotation is more accurate, finding that it actually has accuracy comparable to set-based annotation. Our new dataset will support the development of more useful systems for this task, and our experimental findings suggest that users can improve the usefulness of these systems by accurately annotating their own data.
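For readers unfamiliar with the two annotation schemes compared here: link-based annotation attaches each message to an earlier message it responds to, while set-based annotation groups messages into conversations directly. The standard reduction from links to sets takes connected components of the link graph, as in the sketch below; the function name and data layout are illustrative, not from the paper.

```python
# Minimal sketch: convert link-based annotations (child, parent) pairs
# into set-based conversations via union-find connected components.
from collections import defaultdict

def links_to_conversations(num_messages, links):
    """links: iterable of (child, parent) message-index pairs."""
    parent = list(range(num_messages))  # union-find forest, one node per message

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)  # merge the two conversations

    groups = defaultdict(set)
    for m in range(num_messages):
        groups[find(m)].add(m)
    return list(groups.values())

# Example: messages 0-4, where 2 replies to 0 and 3 replies to 1.
print(links_to_conversations(5, [(2, 0), (3, 1)]))
# -> [{0, 2}, {1, 3}, {4}]
```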