2022
Investigating automatic and manual filtering methods to produce MT-ready glossaries from existing ones
Maria Afara | Randy Scansani | Loïc Dugast
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
Commercial Machine Translation (MT) providers offer functionalities that allow users to leverage bilingual glossaries. This raises the question of how to turn glossaries intended for human translators into MT-ready ones, removing entries that could harm the MT output. We present two automatic filtering approaches, one based on rules and one relying on a translation memory, as well as a manual filtering procedure carried out by a linguist. The resulting glossaries are added to the MT model, and the outputs are compared against a baseline where no glossary is used and against an output produced with the original glossary. The present work investigates whether any of these filtering methods can bring higher terminology accuracy without negative effects on the overall quality. Results are measured with terminology accuracy and Translation Edit Rate. We test our filters on two language pairs, En-Fr and De-En. Results show that some of the automatically filtered glossaries improve the output compared to the baseline, and that they may help reach a better balance between accuracy and overall quality, replacing the costly manual process without quality loss.
2021
Glossary functionality in commercial machine translation: does it help? A first step to identify best practices for a language service provider
Randy Scansani | Loïc Dugast
Proceedings of Machine Translation Summit XVIII: Users and Providers Track
Recently, a number of commercial Machine Translation (MT) providers have started to offer glossary features that allow users to enforce terminology in the output of a generic model. However, to the best of our knowledge, it is not clear how such features impact terminology accuracy and the overall quality of the output. The present contribution aims at providing a first insight into the performance of the glossary-enhanced generic models offered by four providers. Our tests involve two different domains and language pairs, i.e. Sportswear En–Fr and Industrial Equipment De–En. The output of each generic model and of the glossary-enhanced one is evaluated relying on Translation Error Rate (TER) to take into account the overall output quality and on accuracy to assess compliance with the glossary. This is followed by a manual evaluation. The present contribution mainly focuses on understanding how these glossary features can be fruitfully exploited by language service providers (LSPs), especially in a scenario in which a customer glossary is already available and is added to the generic model as is.
2020
How do LSPs compute MT discounts? Presenting a company’s pipeline and its use
Randy Scansani | Lamis Mhedhbi
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
In this paper we present a pipeline developed at Acolad to test a Machine Translation (MT) engine and compute the discount to be applied when its output is used in production. Our pipeline includes three main steps in which quality and productivity are measured through automatic metrics, manual evaluation, and by tracking editing and temporal effort during a post-editing task. Thanks to this approach, it is possible to evaluate the output quality and compute an engine-specific discount. Our test pipeline tackles the complexity of transforming productivity measurements into discounts by comparing the outcome of each of the above-mentioned steps to an estimate of the average productivity of translation from scratch. The discount is obtained by subtracting the resulting coefficient from the per-word rate. After a description of the pipeline, the paper presents its application to four engines, discussing its results and showing that our method of estimating post-editing effort through manual evaluation seems to capture the actual productivity. The pipeline relies heavily on the work of professional post-editors, with the aim of creating a mutually beneficial cooperation between users and developers.
2019
MAGMATic: A Multi-domain Academic Gold Standard with Manual Annotation of Terminology for Machine Translation Evaluation
Randy Scansani | Luisa Bentivogli | Silvia Bernardini | Adriano Ferraresi
Proceedings of Machine Translation Summit XVII: Research Track
Do translator trainees trust machine translation? An experiment on post-editing and revision
Randy Scansani | Silvia Bernardini | Adriano Ferraresi | Luisa Bentivogli
Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks
2017
Enhancing Machine Translation of Academic Course Catalogues with Terminological Resources
Randy Scansani | Silvia Bernardini | Adriano Ferraresi | Federico Gaspari | Marcello Soffritti
Proceedings of the Workshop Human-Informed Translation and Interpreting Technology
This paper describes an approach to translating course unit descriptions from Italian and German into English, using a phrase-based machine translation (MT) system. The genre is very prominent among those requiring translation by universities in European countries where English is a non-native language. For each language combination, an in-domain bilingual corpus including course unit and degree program descriptions is used to train an MT engine, whose output is then compared to a baseline engine trained on the Europarl corpus. In a subsequent experiment, a bilingual terminology database is added to the training sets of both engines and its impact on the output quality is evaluated based on BLEU and post-editing score. Results suggest that the use of domain-specific corpora boosts the engines' quality for both language combinations, especially for German-English, whereas adding terminological resources does not seem to bring notable benefits.