Tong Pu
2022
FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in Machine Translation
Wenhao Zhu | Shujian Huang | Tong Pu | Pingxuan Huang | Xu Zhang | Jian Yu | Wei Chen | Yanfeng Wang | Jiajun Chen
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Previous research on adapting a general neural machine translation (NMT) model to a specific domain usually neglects the diversity of translation within the same domain, which is a core problem for domain adaptation in real-world scenarios. One representative of such challenging scenarios is deploying a translation system for a conference on a specific topic, e.g., global warming or coronavirus, where resources are usually extremely limited due to the tight schedule. To motivate wider investigation of such scenarios, we present a real-world fine-grained domain adaptation task in machine translation (FGraDA). The FGraDA dataset consists of Chinese-English translation tasks for four sub-domains of information technology: autonomous vehicles, AI education, real-time networks, and smartphones. Each sub-domain is equipped with a development set and a test set for evaluation purposes. To be closer to reality, FGraDA does not employ any in-domain bilingual training data but provides bilingual dictionaries and a wiki knowledge base, which can be obtained more easily within a short time. We benchmark the fine-grained domain adaptation task and present in-depth analyses showing that challenging problems remain in further improving performance with such heterogeneous resources.