Scaling the ISLE framework: validating tests of machine translation quality for multi-dimensional measurement

Michelle Vanni, Keith J. Miller


Abstract
Work on comparing a set of linguistic test scores for MT output to a set of the same tests' scores for naturally occurring target language text (Jones and Rusk 2000) broke new ground in automating MT Evaluation. However, the tests used were selected on an ad hoc basis. In this paper, we report on work to extend our understanding, through refinement and validation, of suitable linguistic tests in the context of our novel approach to MTE. This approach was introduced in Miller and Vanni (2001a) and employs standard, rather than randomly chosen, tests of MT output quality selected from the ISLE framework, as well as a scoring system for predicting the type of information processing task performable with the output. Since the intent is to automate the scoring system, this work can also be viewed as the preliminary steps of algorithm design.
Anthology ID: 2001.mtsummit-eval.9
Volume: Workshop on MT Evaluation
Month: September 18-22
Year: 2001
Address: Santiago de Compostela, Spain
Editors: Eduard Hovy, Margaret King, Sandra Manzi, Florence Reeder
Venue: MTSummit
URL: https://aclanthology.org/2001.mtsummit-eval.9
Cite (ACL): Michelle Vanni and Keith J. Miller. 2001. Scaling the ISLE framework: validating tests of machine translation quality for multi-dimensional measurement. In Workshop on MT Evaluation, Santiago de Compostela, Spain.
Cite (Informal): Scaling the ISLE framework: validating tests of machine translation quality for multi-dimensional measurement (Vanni & Miller, MTSummit 2001)
PDF: https://aclanthology.org/2001.mtsummit-eval.9.pdf