Yoko Matsuzaki
2023
KYB General Machine Translation Systems for WMT23
Ben Li | Yoko Matsuzaki | Shivam Kalkar
Proceedings of the Eighth Conference on Machine Translation
This paper describes our approach to building a neural machine translation system for the WMT 2023 general machine translation shared task. Our model is based on the Transformer architecture with base settings. We improve performance through several strategies: the pretrained model is fine-tuned on an extended dataset, and specialized pre- and post-processing techniques are applied to further raise translation quality. Our central focus is efficient model training, aiming for high accuracy from the combination of a compact model and curated data. We also performed ensembling augmented by N-best ranking for both translation directions, English to Japanese and Japanese to English.
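The abstract mentions ensembling augmented by N-best ranking but does not spell out the procedure. A minimal sketch of one common realization is shown below: each checkpoint contributes an N-best list with model scores, and the hypothesis with the highest averaged log-probability is kept. The function names and the two-checkpoint toy example are illustrative assumptions, not the authors' actual code.

```python
"""Hedged sketch of N-best reranking over an ensemble of checkpoints.

Assumed setup (not taken from the paper): each checkpoint has already
produced an N-best list of (hypothesis, log-probability) pairs for a
source sentence. The rerank step averages checkpoint log-probabilities
per hypothesis and keeps the best-scoring one.
"""

from collections import defaultdict
from typing import Dict, List, Tuple

# One N-best list per checkpoint: hypothesis text -> log-probability.
NBestList = Dict[str, float]


def rerank(nbest_per_checkpoint: List[NBestList]) -> str:
    """Pick the hypothesis with the highest average log-probability
    across the checkpoints that proposed it."""
    totals: Dict[str, float] = defaultdict(float)
    counts: Dict[str, int] = defaultdict(int)
    for nbest in nbest_per_checkpoint:
        for hyp, logprob in nbest.items():
            totals[hyp] += logprob
            counts[hyp] += 1
    averaged: List[Tuple[float, str]] = [
        (totals[h] / counts[h], h) for h in totals
    ]
    return max(averaged)[1]


if __name__ == "__main__":
    # Toy example with two checkpoints and overlapping 2-best lists.
    ckpt_a = {"kore wa pen desu .": -1.2, "sore wa pen desu .": -1.5}
    ckpt_b = {"kore wa pen desu .": -1.1, "kore ga pen desu .": -1.6}
    print(rerank([ckpt_a, ckpt_b]))  # -> "kore wa pen desu ."
```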
2022
KYB General Machine Translation Systems for WMT22
Shivam Kalkar | Yoko Matsuzaki | Ben Li
Proceedings of the Seventh Conference on Machine Translation (WMT)
We describe our neural machine translation system for the general machine translation shared task at WMT 2022. Our systems are based on the Transformer (Vaswani et al., 2017) with base settings. We explore high-efficiency model training strategies, aiming to train a high-accuracy model using a small model and a reasonable amount of data. We performed fine-tuning and ensembling with N-best ranking in the English to/from Japanese directions. We found that fine-tuning on a filtered JParaCrawl dataset leads to better translations for both the English-to-Japanese and Japanese-to-English models. In the English-to-Japanese model, ensembling and N-best ranking over 10 different checkpoints further improved translations. Comparing with other online translation services, we found that our model achieved strong translation quality.
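The abstract states that fine-tuning on a filtered JParaCrawl set helped, without giving the filtering criteria. The sketch below shows one plausible recipe under stated assumptions: a per-pair alignment-score threshold plus length and length-ratio caps, reading a tab-separated file with a score column followed by the English and Japanese sentences. The input format, thresholds, and function names are assumptions for illustration, not the authors' exact pipeline.

```python
"""Hedged sketch of filtering a JParaCrawl-style parallel corpus
before fine-tuning. Criteria and file format are assumptions, not
the authors' recipe."""

import csv
import sys


def keep_pair(score: float, en: str, ja: str,
              min_score: float = 0.7,
              max_ratio: float = 3.0,
              max_len: int = 250) -> bool:
    """Return True if the sentence pair passes the filters."""
    if score < min_score:
        return False
    if not en.strip() or not ja.strip():
        return False
    en_len, ja_len = len(en), len(ja)  # character counts on both sides
    if en_len > max_len or ja_len > max_len:
        return False
    ratio = max(en_len, ja_len) / max(1, min(en_len, ja_len))
    return ratio <= max_ratio


def filter_tsv(path_in: str, path_en: str, path_ja: str) -> None:
    """Write surviving pairs to plain-text files, one sentence per line,
    ready for subword tokenization and fine-tuning."""
    with open(path_in, newline="", encoding="utf-8") as fin, \
         open(path_en, "w", encoding="utf-8") as fen, \
         open(path_ja, "w", encoding="utf-8") as fja:
        for row in csv.reader(fin, delimiter="\t"):
            if len(row) < 3:
                continue
            try:
                score = float(row[0])
            except ValueError:
                continue  # skip malformed or header rows
            en, ja = row[1], row[2]
            if keep_pair(score, en, ja):
                fen.write(en + "\n")
                fja.write(ja + "\n")


if __name__ == "__main__":
    filter_tsv(sys.argv[1], sys.argv[2], sys.argv[3])
```

The surviving sentence pairs would then be used to fine-tune the pretrained Transformer model as described in the abstract.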