Zhongxiang Yan
2021
The NiuTrans Machine Translation Systems for WMT21
Shuhan Zhou | Tao Zhou | Binghao Wei | Yingfeng Luo | Yongyu Mu | Zefan Zhou | Chenglong Wang | Xuanjun Zhou | Chuanhao Lv | Yi Jing | Laohu Wang | Jingnan Zhang | Canan Huang | Zhongxiang Yan | Chi Hu | Bei Li | Tong Xiao | Jingbo Zhu
Proceedings of the Sixth Conference on Machine Translation
This paper describes the NiuTrans neural machine translation systems for the WMT 2021 news translation tasks. We made submissions to 9 language directions, including English↔{Chinese, Japanese, Russian, Icelandic} and English→Hausa. Our primary systems are built on several effective variants of the Transformer, e.g., Transformer-DLCL and ODE-Transformer. We also utilize back-translation, knowledge distillation, post-ensemble, and iterative fine-tuning to further enhance model performance.
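Of the techniques listed in the abstract, back-translation admits a compact illustration: a target-to-source model translates monolingual target-side text back into the source language, and the resulting synthetic pairs are mixed into the genuine bitext. The sketch below is illustrative only; the `reverse_model.translate()` API is a hypothetical stand-in, not the authors' actual pipeline.

```python
# Minimal back-translation sketch, assuming a hypothetical
# `reverse_model.translate()` call; not the NiuTrans implementation.

def back_translate(reverse_model, mono_target_sents):
    """Create synthetic parallel data from target-side monolingual text.

    Each monolingual target sentence is translated into the source
    language by a target->source model; the resulting
    (synthetic source, real target) pairs augment the real bitext.
    """
    synthetic_pairs = []
    for tgt in mono_target_sents:
        src = reverse_model.translate(tgt)  # target -> source direction
        synthetic_pairs.append((src, tgt))
    return synthetic_pairs

# Usage: train the forward (source->target) model on the union of
# the real bitext and the synthetic pairs, e.g.:
#   train_data = real_bitext + back_translate(reverse_model, mono_target)
```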
The NiuTrans System for the WMT 2021 Efficiency Task
Chenglong Wang | Chi Hu | Yongyu Mu | Zhongxiang Yan | Siming Wu | Yimin Hu | Hang Cao | Bei Li | Ye Lin | Tong Xiao | Jingbo Zhu
Proceedings of the Sixth Conference on Machine Translation
This paper describes the NiuTrans system for the WMT21 translation efficiency task. Following last year’s work, we explore various techniques to improve translation efficiency while maintaining translation quality. We investigate combinations of lightweight Transformer architectures and knowledge distillation strategies, and we further improve translation efficiency with graph optimization, low precision, dynamic batching, and parallel pre/post-processing. Putting these together, our system can translate 247,000 words per second on an NVIDIA A100, 3× faster than our system from last year. It is the fastest system and has the lowest memory consumption on the GPU-throughput track. The code, model, and pipeline will be available at NiuTrans.NMT.
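Among the efficiency techniques named above, dynamic batching is easy to sketch: sentences are sorted by length and packed into batches under a token budget, so that padding (and thus wasted GPU compute) is minimized. The snippet below is a minimal sketch of that general idea, with a hypothetical `max_tokens` budget; it is not the actual NiuTrans.NMT implementation.

```python
# Minimal dynamic-batching sketch for inference, assuming tokenized
# sentences; illustrative only, not the NiuTrans.NMT implementation.

def dynamic_batches(sentences, max_tokens=4096):
    """Group sentences into batches under a padded-token budget.

    Sorting by length first keeps similar-length sentences together,
    which reduces padding inside each batch.
    """
    order = sorted(range(len(sentences)), key=lambda i: len(sentences[i]))
    batch, batch_max_len = [], 0
    for i in order:
        new_max = max(batch_max_len, len(sentences[i]))
        # Padded batch cost = longest sentence * number of sentences.
        if batch and new_max * (len(batch) + 1) > max_tokens:
            yield batch
            batch, new_max = [], len(sentences[i])
        batch.append(i)
        batch_max_len = new_max
    if batch:
        yield batch  # batches hold indices, so outputs can be
                     # restored to the original input order afterwards
```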