Vladislav Kovalenko


2023

PROMT Systems for WMT23 Shared General Translation Task
Alexander Molchanov | Vladislav Kovalenko
Proceedings of the Eighth Conference on Machine Translation

This paper describes the PROMT submissions for the WMT23 Shared General Translation Task. This year we participated in two directions of the Shared Translation Task: English to Russian and Russian to English. Our models are trained with the MarianNMT toolkit using the transformer-big configuration. We use BPE for text encoding; both models are unconstrained. We achieve competitive results according to automatic metrics in both directions.
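
For context, a minimal sketch of the BPE preprocessing step mentioned above, using the subword-nmt package; the paper does not name its BPE implementation, so the library choice, file names, and merge count are assumptions:

# Minimal sketch of BPE preprocessing as described above; the paper does not
# name its BPE implementation, so subword-nmt, the file paths, and the merge
# count here are illustrative assumptions.
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn BPE merges on tokenized training text (32k merges is assumed).
with open("train.en-ru.tok", encoding="utf-8") as infile, \
        open("codes.bpe", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, num_symbols=32000)

# Apply the learned merges to segment text into subword units.
with open("codes.bpe", encoding="utf-8") as codes:
    bpe = BPE(codes)
print(bpe.process_line("a rare word becomes several subword units"))

The same learned merge table is applied to both the source and target sides before training, so the model only ever sees the fixed subword vocabulary.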

2022

PROMT Systems for WMT22 General Translation Task
Alexander Molchanov | Vladislav Kovalenko | Natalia Makhamalkina
Proceedings of the Seventh Conference on Machine Translation (WMT)

The PROMT systems are trained with the MarianNMT toolkit. All systems use the transformer-big configuration. We use BPE for text encoding; the vocabulary sizes vary from 24k to 32k across language pairs. All systems are unconstrained. We use all data provided by the WMT organizers, all publicly available data, and some private data. We participate in four directions: English to Russian, English to German, German to English, and Ukrainian to English.
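
As a rough illustration of what the transformer-big configuration entails, the sketch below writes a Marian-style training config from Python; the hyperparameters are the standard "big" Transformer values (6 encoder and decoder layers, 1024-dimensional embeddings, 16 attention heads, 4096-dimensional feed-forward layers), while the corpus paths and the 32k vocabularies are placeholders, not the actual PROMT setup:

# Sketch of a Marian training configuration in the transformer-big setting
# described above. Hyperparameters follow the standard "big" Transformer;
# file paths and the 32k vocabulary size are illustrative placeholders.
import yaml  # PyYAML

config = {
    "type": "transformer",
    "enc-depth": 6,
    "dec-depth": 6,
    "dim-emb": 1024,
    "transformer-heads": 16,
    "transformer-dim-ffn": 4096,
    "train-sets": ["train.bpe.en", "train.bpe.ru"],  # placeholder corpora
    "vocabs": ["vocab.en.yml", "vocab.ru.yml"],      # placeholder vocabs
    "dim-vocabs": [32000, 32000],
}

with open("train.yml", "w", encoding="utf-8") as f:
    yaml.safe_dump(config, f, sort_keys=False)
# Training would then be launched with: marian --config train.yml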

2021

PROMT Systems for WMT21 Terminology Translation Task
Alexander Molchanov | Vladislav Kovalenko | Fedor Bykov
Proceedings of the Sixth Conference on Machine Translation

This paper describes the PROMT submissions for the WMT21 Terminology Translation Task. We participate in two directions: English to French and English to Russian. Our final submissions are MarianNMT-based neural systems. We present two technologies for terminology translation: a modification of the Dinu et al. (2019) soft-constrained approach and our own approach called PROMT Smart Neural Dictionary (SmartND). We achieve good results in both directions.
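
To make the soft-constrained approach concrete, here is a toy sketch of source-side annotation in the spirit of Dinu et al. (2019), where each matched source term is tagged inline with its required target-language translation so the model learns to copy the injected term; the tag tokens and the dictionary entry are illustrative assumptions and do not reproduce the exact PROMT or SmartND scheme:

# Toy sketch of source-side terminology annotation in the spirit of
# Dinu et al. (2019). The tag tokens and the dictionary entry are
# illustrative assumptions, not the exact PROMT scheme.

def annotate(tokens, term_dict):
    """Append the dictionary translation inline after each matched source term."""
    out = []
    for tok in tokens:
        if tok.lower() in term_dict:
            # <src> source-term <tgt> target-term </tgt> inline annotation
            out += ["<src>", tok, "<tgt>", term_dict[tok.lower()], "</tgt>"]
        else:
            out.append(tok)
    return out

terms = {"terminology": "terminologie"}  # toy English-French entry
print(" ".join(annotate("the terminology task".split(), terms)))
# -> the <src> terminology <tgt> terminologie </tgt> task

Annotated sentences of this form are used during training so that, at inference time, injecting a dictionary translation softly constrains the decoder to produce the required term.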