Contemplating automatic MT evaluation

John S. White


Abstract
Researchers, developers, translators and information consumers all share the problem that there is no accepted standard for evaluating machine translation. The problem is further confounded by the fact that MT evaluations, properly done, require a considerable commitment of time and resources, an anachronism in this day of cross-lingual information processing, when new MT systems may be developed in weeks instead of years. This paper surveys the needs addressed by several of the classic “types” of MT evaluation, and speculates on ways that each of these types might be automated to create relevant, near-instantaneous evaluation of approaches and systems.
Anthology ID:
2000.amta-papers.10
Volume:
Proceedings of the Fourth Conference of the Association for Machine Translation in the Americas: Technical Papers
Month:
October 10-14
Year:
2000
Address:
Cuernavaca, Mexico
Editor:
John S. White
Venue:
AMTA
Publisher:
Springer
Pages:
100–108
URL:
https://link.springer.com/chapter/10.1007/3-540-39965-8_10
DOI:
10.1007/3-540-39965-8_10
Cite (ACL):
John S. White. 2000. Contemplating automatic MT evaluation. In Proceedings of the Fourth Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 100–108, Cuernavaca, Mexico. Springer.
Cite (Informal):
Contemplating automatic MT evaluation (White, AMTA 2000)
PDF:
https://link.springer.com/chapter/10.1007/3-540-39965-8_10