Workshop on MT Evaluation
Proceedings of the Workshop
21 September 2001
Organisers: Eduard Hovy, Margaret King, Sandra Manzi, and
Contents
Programme of papers
- Session 1: The ISLE taxonomy for MT Evaluation and its use
Andrei Popescu-Belis, Sandra Manzi & Maghi King: Towards a two-stage taxonomy for machine translation evaluation [PDF, 187KB]
Elia Yuste-Rodrigo & Francine Braun-Chen: Comparative evaluation of the linguistic output of MT systems for translation and information purposes [PDF, 134KB]
Keith J. Miller, Donna M. Gates, Nancy Underwood & Josemina Magdalen: Evaluating machine translation output for an unknown source language: report of an ISLE-based investigation [PDF, 99KB]
Michelle Vanni & Keith J. Miller: Scaling the ISLE framework: validating tests of machine translation quality for multi-dimensional measurement [PDF, 121KB]
- Session 2: Correlations between evaluation measures
Martin Rajman & Tony Hartley: Automatically predicting MT systems rankings compatible with fluency, adequacy and informativeness scores [PDF, 154KB]
John White: Predicting intelligibility from fidelity in MT evaluation [PDF, 235KB]
- Session 3: Analytic measures of output quality, focusing on noun phrases
John White & Monika Forner: Predicting MT fidelity from noun-compound handling [PDF, 64KB]
Widad Mustafa El Hadi, Ismail Timimi & Marianne Dabbadie: Setting a methodology for machine translation evaluation [PDF, 99KB]
Florence Reeder, Keith Miller, Jennifer Doyon & John White: The naming of things and the confusion of tongues: an MT metric [PDF, 80KB]
- Session 4: MT Evaluation in relation to other domains
Christine Bruckner & Mirko Plitt: Evaluating the operational benefit of using machine translation output as translation memory input [PDF, 32KB]