Fangxu Liu
2021
Tencent Translation System for the WMT21 News Translation Task
Longyue Wang | Mu Li | Fangxu Liu | Shuming Shi | Zhaopeng Tu | Xing Wang | Shuangzhi Wu | Jiali Zeng | Wen Zhang
Proceedings of the Sixth Conference on Machine Translation
This paper describes the Tencent Translation systems for the WMT21 shared task. We participate in the news translation task on three language pairs: Chinese-English, English-Chinese and German-English. Our systems are built on various Transformer models with novel techniques adapted from our recent research work. First, we combine different data augmentation methods including back-translation, forward-translation and right-to-left training to enlarge the training data. We also apply language coverage bias, data rejuvenation and uncertainty-based sampling approaches to select content-relevant and high-quality data from large parallel and monolingual corpora. In addition to in-domain fine-tuning, we propose a fine-grained “one model one domain” approach to model the characteristics of different news genres at the fine-tuning and decoding stages. Besides, we use a greedy-based ensemble algorithm and a transductive ensemble method to further boost our systems. Building on our success in the last WMT, we continue to employ advanced techniques such as large batch training, data selection and data filtering. Finally, our constrained Chinese-English system achieves a case-sensitive BLEU score of 33.4, the highest among all submissions, and the German-English system is ranked second.
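The abstract mentions a greedy-based ensemble algorithm. A minimal sketch of what greedy forward selection for model ensembling typically looks like (the function names and scoring interface here are illustrative assumptions, not the authors' implementation): starting from an empty ensemble, repeatedly add the single model that most improves a development-set score, and stop when no candidate helps.

```python
def greedy_ensemble(models, score_fn):
    """Greedy forward selection of an ensemble.

    models:   candidate model identifiers
    score_fn: callable mapping a list of selected models to a dev score
              (e.g. dev-set BLEU of their combined output)
    Returns the selected subset and its score.
    """
    selected = []
    best = float("-inf")
    improved = True
    while improved:
        improved = False
        choice = None
        # Try each remaining candidate and keep the one with the best gain.
        for m in set(models) - set(selected):
            s = score_fn(selected + [m])
            if s > best:
                best, choice = s, m
                improved = True
        if improved:
            selected.append(choice)
    return selected, best
```

In practice `score_fn` would decode the dev set with the candidate ensemble and compute BLEU, which makes each step expensive but keeps the search itself trivial.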
2020
Tencent Neural Machine Translation Systems for the WMT20 News Translation Task
Shuangzhi Wu | Xing Wang | Longyue Wang | Fangxu Liu | Jun Xie | Zhaopeng Tu | Shuming Shi | Mu Li
Proceedings of the Fifth Conference on Machine Translation
This paper describes the Tencent Neural Machine Translation systems for the WMT 2020 news translation tasks. We participate in the shared news translation task on the English ↔ Chinese and English → German language pairs. Our systems are built on deep Transformer models and several data augmentation methods. We propose a boosted in-domain fine-tuning method to improve single models. Ensembling is used to combine single models, and we propose an iterative transductive ensemble method which can further improve translation performance based on the ensemble results. We achieve a BLEU score of 36.8 and the highest chrF score of 0.648 on the Chinese → English task.
Co-authors
- Shuangzhi Wu 2
- Xing Wang 2
- Longyue Wang 2
- Zhaopeng Tu 2
- Shuming Shi 2
Venues
- wmt 2