This paper describes a Machine Translation (MT) evaluation experiment in which emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, the output of three different MT systems is compared in order to establish which system best serves the organisation's multilingual communication and information needs. This is a comparative usability- and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. In conclusion, it is suggested that further research be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.
This paper tackles the issue of how to teach Machine Translation (MT) to future translators enrolled in a university translation-training course. Teaching MT to trainee translators usually entails two main difficulties: first, a misunderstanding of what MT is really useful for, which often leads to the misconception that MT output is always worthless; second, a widespread fear that machines will replace human translators, leaving them out of work. To counter these widespread prejudices about MT among (future) translators, translation instruction should be primarily practical and realistic, as well as learner-centred. It thus ought to highlight the fact that: 1) MT systems and applications are essential components of today's global multilingual documentation production; 2) the way in which MT is employed in large multilingual organisations and international companies opens up new work avenues for translators. This is illustrated by two activities: one using commercial MT systems for quick translations, whose output is improved through the trainees' interaction with the system; the other focusing on the comprehensibility of MT output for monolingual speakers of the target language. MT thus becomes a mainstream component of the translation-training framework delineated in Yuste (2000) which, by placing the trainee in workplace-like situations, also echoes Kiraly (1999).