Paula Estrella


2020

Re-design of the Machine Translation Training Tool (MT3)
Paula Estrella | Emiliano Cuenca | Laura Bruno | Jonathan Mutal | Sabrina Girletti | Lise Volkart | Pierrette Bouillon
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation

We believe that machine translation (MT) must be introduced to translation students as part of their training, in preparation for their professional life. In this paper we present a new version of the tool called MT3, which builds on and extends a joint effort undertaken by the Faculty of Languages of the University of Córdoba and the Faculty of Translation and Interpreting of the University of Geneva to develop an open-source web platform for teaching MT to translation students. We also report on a pilot experiment aimed at testing the viability of using MT3 in an MT course. The pilot allowed us to identify areas for improvement and to collect students’ feedback about the tool’s usability.

2019

Monolingual backtranslation in a medical speech translation system for diagnostic interviews - a NMT approach
Jonathan Mutal | Pierrette Bouillon | Johanna Gerlach | Paula Estrella | Hervé Spechbach
Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks

Differences between SMT and NMT Output - a Translators’ Point of View
Jonathan Mutal | Lise Volkart | Pierrette Bouillon | Sabrina Girletti | Paula Estrella
Proceedings of the Human-Informed Translation and Interpreting Technology Workshop (HiT-IT 2019)

In this study, we compare the output quality of two MT systems, a statistical (SMT) and a neural (NMT) engine, customised for Swiss Post’s Language Service using the same training data. We focus on the point of view of professional translators and investigate how they perceive the differences between the MT output and a human reference (namely deletions, substitutions, insertions and word order). Our findings show that translators more frequently consider these differences to be errors in SMT than in NMT, and that deletions are the most serious errors in both architectures. We also observe lower agreement on which differences should be corrected in NMT than in SMT, suggesting that errors are easier to identify in SMT. These findings confirm the ability of NMT to produce correct paraphrases, which could also explain why BLEU is often considered an inadequate metric for evaluating the performance of NMT systems.
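As an illustration only (not code from the study), the short Python sketch below categorises word-level differences between an MT hypothesis and a human reference into deletions, substitutions and insertions, in the spirit of the difference types examined in the paper; the sentences and the counting convention for length mismatches are invented for the example.

from difflib import SequenceMatcher

def diff_categories(reference, hypothesis):
    # Word-level comparison of a human reference and an MT hypothesis.
    ref, hyp = reference.split(), hypothesis.split()
    counts = {"deletions": 0, "substitutions": 0, "insertions": 0}
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if tag == "delete":        # reference words missing from the MT output
            counts["deletions"] += i2 - i1
        elif tag == "insert":      # extra words in the MT output
            counts["insertions"] += j2 - j1
        elif tag == "replace":     # changed words, plus any length mismatch
            ref_len, hyp_len = i2 - i1, j2 - j1
            counts["substitutions"] += min(ref_len, hyp_len)
            if ref_len > hyp_len:
                counts["deletions"] += ref_len - hyp_len
            else:
                counts["insertions"] += hyp_len - ref_len
    return counts

print(diff_categories("the parcel was delivered today", "the parcel arrived today"))
# -> {'deletions': 1, 'substitutions': 1, 'insertions': 0}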

2018

Integrating MT at Swiss Post’s Language Service: preliminary results
Pierrette Bouillon | Sabrina Girletti | Paula Estrella | Jonathan Mutal | Martina Bellodi | Beatrice Bircher
Proceedings of the 21st Annual Conference of the European Association for Machine Translation

This paper presents the preliminary results of an ongoing academia-industry collaboration that aims to integrate MT into the workflow of Swiss Post’s Language Service. We describe the evaluations carried out to select an MT tool (commercial or open-source) and assess the suitability of machine translation for post-editing in Swiss Post’s various subject areas and language pairs. The goal of this first phase is to provide recommendations with regard to the tool, language pair and most suitable domain for implementing MT.

Pre-professional pre-conceptions
Laura Bruno | Antonio Miloro | Paula Estrella | Mariona Sabaté Carrove
Proceedings of the 21st Annual Conference of the European Association for Machine Translation

While MT followed by post-editing (MT+PE) has become an industry standard, our translation schools have not been able to keep pace with these changes by updating their academic programs. We polled 100 pre-professionals and confirmed that, in our local context, they are reluctant to accept post-editing jobs, mainly because of pre-conceptions or negative opinions about MT inherited during their studies.

2016

On the Robustness of Standalone Referring Expression Generation Algorithms Using RDF Data
Pablo Duboue | Martin Ariel Domínguez | Paula Estrella
Proceedings of the 2nd International Workshop on Natural Language Generation and the Semantic Web (WebNLG 2016)

2012

Semantic Textual Similarity for MT evaluation
Julio Castillo | Paula Estrella
Proceedings of the Seventh Workshop on Statistical Machine Translation

SAGAN: An approach to Semantic Textual Similarity based on Textual Entailment
Julio Castillo | Paula Estrella
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

2011

Bootstrapping a statistical speech translator from a rule-based one
Manny Rayner | Paula Estrella | Pierrette Bouillon
Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation

2010

Dialogue Systems for Virtual Environments
Luciana Benotti | Paula Estrella | Carlos Areces
Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas

A Bootstrapped Interlingua-Based SMT Architecture
Manny Rayner | Paula Estrella | Pierrette Bouillon
Proceedings of the 14th Annual Conference of the European Association for Machine Translation

2009

Using Artificially Generated Data to Evaluate Statistical Machine Translation
Manny Rayner | Paula Estrella | Pierrette Bouillon | Beth Ann Hockey | Yukie Nakao
Proceedings of the 2009 Workshop on Grammar Engineering Across Frameworks (GEAF 2009)

Using Artificial Data to Compare the Difficulty of Using Statistical Machine Translation in Different Language-Pairs
Manny Rayner | Paula Estrella | Pierrette Bouillon | Yukie Nakao
Proceedings of Machine Translation Summit XII: Posters

Relating recognition, translation and usability of two different versions of MedSLT
Marianne Starlander | Paula Estrella
Proceedings of Machine Translation Summit XII: Posters

2008

Improving Contextual Quality Models for MT Evaluation Based on Evaluators’ Feedback
Paula Estrella | Andrei Popescu-Belis | Maghi King
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The Framework for the Evaluation of Machine Translation (FEMTI) contains guidelines for building a quality model that is used to evaluate MT systems in relation to the purpose and intended context of use of the systems. Contextual quality models can thus be constructed, but entering the knowledge required for this operation into FEMTI is a complex task. An experiment was set up to transfer knowledge from MT evaluation experts into the FEMTI guidelines, by polling experts about the evaluation methods they would use in a particular context and then inferring from the results generic relations between characteristics of the context of use and quality characteristics. The results of this hands-on exercise, carried out as part of a conference tutorial, have served to refine FEMTI’s “generic contextual quality model” and to obtain feedback on the FEMTI guidelines in general.
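A purely hypothetical sketch of the inference step described above, assuming each poll answer pairs a context characteristic with the quality characteristics an expert would evaluate in that context; all names and data are invented and do not come from the experiment.

from collections import Counter, defaultdict

# Invented poll data: (context characteristic, quality characteristics chosen).
poll_answers = [
    ("assimilation", ["adequacy", "speed"]),
    ("assimilation", ["adequacy"]),
    ("dissemination", ["fluency", "terminology", "adequacy"]),
    ("dissemination", ["fluency", "terminology"]),
]

relations = defaultdict(Counter)
totals = Counter()
for context_char, quality_chars in poll_answers:
    relations[context_char].update(quality_chars)
    totals[context_char] += 1

# Generic relation strength = share of experts who selected the characteristic.
for context_char, counts in relations.items():
    for quality_char, n in counts.items():
        print(f"{context_char} -> {quality_char}: {n / totals[context_char]:.2f}")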

2007

Generating Usable Formats for Metadata and Annotations in a Large Meeting Corpus
Andrei Popescu-Belis | Paula Estrella
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

How much data is needed for reliable MT evaluation? Using bootstrapping to study human and automatic metrics
Paula Estrella | Olivier Hamon | Andrei Popescu-Belis
Proceedings of Machine Translation Summit XI: Papers

Context-based evaluation of MT systems: principles and tools
Maghi King | Andrei Popescu-Belis | Paula Estrella
Proceedings of Machine Translation Summit XI: Tutorials

A new method for the study of correlations between MT evaluation metrics
Paula Estrella | Andrei Popescu-Belis | Maghi King
Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages: Papers

2006

A Model for Context-Based Evaluation of Language Processing Systems and its Application to Machine Translation Evaluation
Andrei Popescu-Belis | Paula Estrella | Margaret King | Nancy Underwood
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper, we propose a formal framework that takes into account the influence of the intended context of use of an NLP system on the procedure and the metrics used to evaluate the system. In particular, we introduce the notion of a context-dependent quality model and explain how it can be adapted to a given context of use. More specifically, we define vector-space representations of contexts of use and of quality models, which are connected by a generic contextual quality model (GCQM). For each domain, experts in evaluation are needed to build a GCQM based on analytic knowledge and on previous evaluations, using the mechanism proposed here. The main source of inspiration for this work is the FEMTI framework for the evaluation of machine translation, which partly implements the present model and is described briefly along with insights from other domains.
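As an informal illustration of this formalism (a sketch under the simplifying assumption that both contexts of use and quality models are plain weight vectors), the GCQM can be pictured as a matrix mapping a context-of-use vector onto quality-characteristic weights; all feature names and numbers below are invented.

import numpy as np

context_features = ["gisting", "dissemination", "time_pressure"]
quality_chars = ["fluency", "adequacy", "speed", "terminology"]

# GCQM: rows = context features, columns = quality characteristics.
gcqm = np.array([
    [0.2, 0.9, 0.6, 0.3],   # gisting: adequacy weighs most
    [0.9, 0.8, 0.2, 0.8],   # dissemination: fluency and terminology weigh most
    [0.1, 0.3, 0.9, 0.1],   # time pressure: speed dominates
])

# A concrete context of use, expressed in the same vector space.
context = np.array([1.0, 0.0, 0.5])   # gisting under some time pressure

# The resulting context-dependent quality model: one weight per characteristic.
quality_model = context @ gcqm
for name, weight in zip(quality_chars, quality_model):
    print(f"{name}: {weight:.2f}")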

2005

Finding the System that Suits You Best: Towards the Normalization of MT Evaluation
Paula Estrella | Andrei Popescu-Belis | Nancy Underwood
Translating and the Computer 27