Randy Scansani


Glossary functionality in commercial machine translation: does it help? A first step to identify best practices for a language service provider
Randy Scansani | Loïc Dugast
Proceedings of Machine Translation Summit XVIII: Users and Providers Track

Recently, a number of commercial Machine Translation (MT) providers have started to offer glossary features allowing users to enforce terminology in the output of a generic model. However, to the best of our knowledge, it is not clear how such features impact terminology accuracy and the overall quality of the output. The present contribution aims at providing a first insight into the performance of the glossary-enhanced generic models offered by four providers. Our tests involve two different domains and language pairs, i.e. Sportswear En–Fr and Industrial Equipment De–En. The output of each generic model and of the glossary-enhanced one is evaluated relying on Translation Error Rate (TER) to take into account the overall output quality and on accuracy to assess the compliance with the glossary. This is followed by a manual evaluation. The present contribution mainly focuses on understanding how these glossary features can be fruitfully exploited by language service providers (LSPs), especially in a scenario in which a customer glossary is already available and is added to the generic model as is.
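The accuracy measure described in this abstract can be illustrated with a minimal sketch. The paper's exact matching rules are not specified here, so this assumes a simple case-insensitive exact-match criterion; the function name and the glossary representation (source term mapped to prescribed target term) are illustrative assumptions, not the authors' implementation.

```python
def glossary_accuracy(outputs, sources, glossary):
    """Share of applicable glossary entries whose prescribed target
    term appears in the corresponding MT output segment.

    outputs, sources: parallel lists of MT output / source segments.
    glossary: dict mapping source term -> required target term.
    """
    hits, applicable = 0, 0
    for src, out in zip(sources, outputs):
        for src_term, tgt_term in glossary.items():
            # An entry is applicable when its source term occurs in the segment.
            if src_term.lower() in src.lower():
                applicable += 1
                if tgt_term.lower() in out.lower():
                    hits += 1
    return hits / applicable if applicable else 0.0
```

In practice one would also handle morphological variants and multi-word terms, which a substring match does not capture.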


How do LSPs compute MT discounts? Presenting a company’s pipeline and its use
Randy Scansani | Lamis Mhedhbi
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation

In this paper we present a pipeline developed at Acolad to test a Machine Translation (MT) engine and compute the discount to be applied when its output is used in production. Our pipeline includes three main steps where quality and productivity are measured through automatic metrics, manual evaluation, and by keeping track of editing and temporal effort during a post-editing task. Thanks to this approach, it is possible to evaluate the output quality and compute an engine-specific discount. Our test pipeline tackles the complexity of transforming productivity measurements into discounts by comparing the outcome of each of the above-mentioned steps to an estimate of the average productivity of translation from scratch. The discount is obtained by subtracting the resulting coefficient from the per-word rate. After a description of the pipeline, the paper presents its application on four engines, discussing its results and showing that our method to estimate post-editing effort through manual evaluation seems to capture the actual productivity. The pipeline relies heavily on the work of professional post-editors, with the aim of creating a mutually beneficial cooperation between users and developers.
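The discount arithmetic outlined in this abstract (a productivity-derived coefficient subtracted from the per-word rate) can be sketched as follows. The abstract does not give the exact formula, so the function name, the use of words-per-hour as the productivity measure, and the way the coefficient is derived are all assumptions for illustration only.

```python
def discounted_pe_rate(per_word_rate, scratch_wph, pe_wph):
    """Hypothetical per-word rate for post-editing (PE), derived by
    comparing measured PE productivity (words/hour) against an
    estimate of translation-from-scratch productivity.

    Assumption: effort scales inversely with throughput, so the
    fraction of from-scratch effort still needed in PE is
    scratch_wph / pe_wph.
    """
    effort_ratio = scratch_wph / pe_wph  # < 1.0 when PE is faster
    # Coefficient = per-word cost saved; subtracting it from the
    # base rate yields the engine-specific PE rate.
    coefficient = per_word_rate * (1.0 - effort_ratio)
    # If PE turned out slower than from-scratch, apply no discount.
    coefficient = max(coefficient, 0.0)
    return per_word_rate - coefficient
```

For example, if post-editing doubles throughput relative to translation from scratch, this sketch halves the per-word rate; a real pipeline would, as the paper describes, combine automatic metrics, manual evaluation, and logged editing/temporal effort before fixing the coefficient.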


MAGMATic: A Multi-domain Academic Gold Standard with Manual Annotation of Terminology for Machine Translation Evaluation
Randy Scansani | Luisa Bentivogli | Silvia Bernardini | Adriano Ferraresi
Proceedings of Machine Translation Summit XVII: Research Track

Do translator trainees trust machine translation? An experiment on post-editing and revision
Randy Scansani | Silvia Bernardini | Adriano Ferraresi | Luisa Bentivogli
Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks


Enhancing Machine Translation of Academic Course Catalogues with Terminological Resources
Randy Scansani | Silvia Bernardini | Adriano Ferraresi | Federico Gaspari | Marcello Soffritti
Proceedings of the Workshop Human-Informed Translation and Interpreting Technology

This paper describes an approach to translating course unit descriptions from Italian and German into English, using a phrase-based machine translation (MT) system. The genre is very prominent among those requiring translation by universities in European countries in which English is a non-native language. For each language combination, an in-domain bilingual corpus including course unit and degree program descriptions is used to train an MT engine, whose output is then compared to a baseline engine trained on the Europarl corpus. In a subsequent experiment, a bilingual terminology database is added to the training sets of both engines and its impact on the output quality is evaluated based on BLEU and post-editing score. Results suggest that the use of domain-specific corpora boosts the engines' quality for both language combinations, especially for German-English, whereas adding terminological resources does not seem to bring notable benefits.