Sharon O’Brien


2024

This paper presents a user study with 11 professional English-Spanish translators in the legal domain. We analysed whether translators’ pre-task perceptions of machine translation (MT) as an aid or as a threat were related to final translation quality and productivity in a post-editing workflow. Pre-task perceptions of MT were collected via a questionnaire before the translators conducted post-editing tasks, and were then correlated with translation productivity and with translation quality as measured by an Adequacy-Fluency evaluation. Each participant translated 13 texts over two consecutive weeks, accounting for 120,102 words in total. Results show that translators who had higher levels of trust in MT, and who thought that MT was not a threat to the translation profession, achieved higher translation quality and productivity. These results have critical implications: improving translator-computer interaction and fostering MT literacy in translation training may be crucial to reducing translators’ negative pre-task perceptions, resulting in better translation productivity and quality, especially adequacy.
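
As a rough illustration of the kind of correlation analysis described above, the sketch below correlates trust ratings with productivity figures. All numbers are invented example data, and Pearson’s r is assumed purely for illustration; the study’s actual data and statistical method are not reproduced here.

```python
# Hedged sketch of a perception-productivity correlation.
# Data are hypothetical illustration values, not the study's data.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One value per translator: pre-task trust-in-MT rating (1-5) and
# post-editing productivity (words per hour) -- invented values.
trust = [4, 5, 2, 3, 5, 4, 1, 3, 4, 2, 5]
words_per_hour = [820, 910, 540, 660, 950, 800, 480, 630, 770, 560, 890]

r = pearson(trust, words_per_hour)
print(f"r = {r:.2f}")
```

A positive r in such an analysis would mirror the reported trend of higher trust co-occurring with higher productivity.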

2023

The perceptions and experiences of machine translation (MT) users before, during, and after their interaction with MT systems, products, or services have been overlooked both in academia and in industry. Traditionally, the focus has been on productivity and quality, often neglecting the human factor. We propose the concept of Machine Translation User Experience (MTUX) for assessing, evaluating, and gathering further information about the experiences of people interacting with MT. By conducting a human-computer interaction (HCI)-based study with 15 professional translators, we analyse which method is best for measuring MTUX, and conclude by suggesting the use of the User Experience Questionnaire (UEQ). Measuring MTUX will help every stakeholder in the MT industry: developers will be able to identify pain points for users and solve them in the development process, resulting in better MTUX and higher adoption of MT systems and products by MT users.

2020

We conducted a survey to understand the impact of machine translation and post-editing awareness on comprehension of and trust in messages disseminated to prepare the public for a weather-related crisis, i.e. flooding. The translation direction was English–Italian. Sixty-one participants—all native Italian speakers with different English proficiency levels—answered our survey. Each participant read and evaluated between three and six crisis messages using ratings and open-ended questions on comprehensibility and trust. The messages were in English and Italian. All the Italian messages had been machine translated and post-edited. Nevertheless, participants were told that only half had been post-edited, so that we could test the impact of post-editing awareness. We could not draw firm conclusions when comparing the scores for trust and comprehensibility assigned to the three types of messages—English, post-edits, and purported raw outputs. However, when scores were triangulated with open-ended answers, stronger patterns were observed, such as the impact of fluency of the translations on their comprehensibility and trustworthiness. We found correlations between comprehensibility and trustworthiness, and identified other factors influencing these aspects, such as the clarity and soundness of the messages. We conclude by outlining implications for crisis preparedness, limitations, and areas for future research.

2016

This paper discusses a methodology for measuring the usability of machine translated content by end users, comparing lightly post-edited content with raw MT output and with source language content. The content consists of Online Help articles for a spreadsheet application from a software company, translated from English into German. Three groups of five users each worked with one version of the content: the English source text (EN), the raw MT version (DE_MT), or the lightly post-edited version (DE_PE), and were asked to carry out six tasks. Usability was measured using an eye tracker together with cognitive, temporal, and pragmatic measures of usability. Satisfaction was measured via a post-task questionnaire presented after the participants had completed the tasks.

We present Kanjingo, a mobile app for post-editing, currently running under iOS. The app was developed using an agile methodology at CNGL, DCU. Though it could be used in numerous scenarios, our test scenario involved the post-editing of machine translated sample content for the non-profit translation organization Translators without Borders. Feedback from a first round of user testing for English-French and English-Spanish was positive, but users also identified a number of usability issues that required improvement. These issues were addressed in a second development round, and a second usability evaluation was carried out in collaboration with another non-profit translation organization, The Rosetta Foundation, again with French and Spanish as target languages.

2012

This paper reports on a project investigating the usability of raw machine translated technical support documentation for a commercial online file storage service. Following the ISO/TR 16982 definition of usability (goal completion, satisfaction, effectiveness, and efficiency), comparisons are drawn for all measures between the original user documentation, written in English, for a well-known online file storage service and raw machine translated output in four target languages: Spanish, French, German, and Japanese. Using native speakers of each language, we found significant differences between the source and MT output for three of the four measures: goal completion, efficiency, and user satisfaction. This leads to the tentative conclusion that there is a difference in usability between well-formed content and raw machine translated content, and we suggest avenues for further work.
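
To make the measures concrete, the following sketch shows one plausible way to compute goal completion and efficiency for two participant groups. The data, group sizes, and task counts are invented for illustration; the paper’s real measures and statistics may differ.

```python
# Hedged sketch of a usability comparison between a source-language
# group and a raw-MT group. All values are hypothetical.
import statistics

# Per-participant results: (tasks completed out of 6, minutes on task)
source_group = [(6, 22.5), (5, 25.0), (6, 20.1), (6, 23.8)]
mt_group = [(4, 30.2), (5, 28.7), (3, 33.5), (4, 29.9)]

def goal_completion(group):
    """Mean proportion of the six tasks completed."""
    return statistics.fmean(done / 6 for done, _ in group)

def efficiency(group):
    """Tasks completed per minute, averaged over participants."""
    return statistics.fmean(done / minutes for done, minutes in group)

for name, g in [("source", source_group), ("raw MT", mt_group)]:
    print(f"{name}: completion={goal_completion(g):.0%}, "
          f"efficiency={efficiency(g):.3f} tasks/min")
```

With group-level figures like these in hand, a significance test between the groups would complete the comparison the abstract describes.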
