Szymon Klocek


2020

eTranslation’s Submissions to the WMT 2020 News Translation Task
Csaba Oravecz | Katina Bontcheva | László Tihanyi | David Kolovratnik | Bhavani Bhaskar | Adrien Lardilleux | Szymon Klocek | Andreas Eisele
Proceedings of the Fifth Conference on Machine Translation

The paper describes the submissions of the eTranslation team to the WMT 2020 news translation shared task. Leveraging the experience from the team’s participation last year, we developed systems for five language pairs with various strategies. Compared to last year, for some language pairs we dedicated considerably more resources to training and tried to follow standard best practices to build competitive systems that can achieve good results in the rankings. By using deep and complex architectures, we sacrificed direct re-usability of our systems in production environments, but evaluation showed that this approach could result in better models that significantly outperform baseline architectures. We also submitted two systems to the zero-shot robustness task; these submissions are described briefly in this paper as well.

2012

DGT-TM: A freely available Translation Memory in 22 languages
Ralf Steinberger | Andreas Eisele | Szymon Klocek | Spyridon Pilos | Patrick Schlüter
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The European Commission's (EC) Directorate General for Translation, together with the EC's Joint Research Centre, is making available a large translation memory (TM; i.e. sentences and their professionally produced translations) covering twenty-two official European Union (EU) languages and their 231 language pairs. Such a resource is typically used by translation professionals in combination with TM software to improve the speed and consistency of their translations. However, this resource also has many uses for translation studies and for language technology applications, including Statistical Machine Translation (SMT), terminology extraction, Named Entity Recognition (NER), multilingual classification and clustering, and many more. In this reference paper for DGT-TM, we introduce this new resource, provide statistics regarding its size, and explain how it was produced and how to use it.