Paralinguist Assessment Decision Factors For Machine Translation Output: A Case Study

Carol Van Ess-Dykema, Jocelyn Phillips, Florence Reeder, Laurie Gerber


Abstract
We describe a case study that presents a framework for examining whether Machine Translation (MT) output enables translation professionals to translate faster while producing better-quality translations than they would without it. We seek decision factors that enable a translation professional, known as a Paralinguist, to determine whether MT output is of sufficient quality to serve as a “seed translation” for post-editors. Unlike MT developers’ automatic metrics, these decision factors must function without a reference translation. We also examine how well MT developers’ automatic metrics correlate with error annotators’ assessments of post-edited translations.
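The sketch below is only an illustration of the contrast the abstract draws; the paper does not specify which automatic metrics or correlation statistic it uses, and all scores shown are made-up placeholder values, not data from the study. It shows that a reference-based metric such as BLEU requires a reference translation (unavailable in the Paralinguist's triage setting), and how metric scores might then be correlated with annotators' quality assessments.

    # Illustrative sketch only: metrics, scores, and correlation choice are assumptions,
    # not the paper's method or data.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from scipy.stats import pearsonr

    reference = "the committee approved the budget yesterday".split()
    mt_output = "the committee has approved budget yesterday".split()

    # Reference-based automatic metric: needs a reference translation,
    # which the Paralinguist's decision factors must do without.
    bleu = sentence_bleu([reference], mt_output,
                         smoothing_function=SmoothingFunction().method1)
    print(f"BLEU against reference: {bleu:.3f}")

    # Hypothetical per-document scores: automatic metric vs. annotator rating.
    metric_scores = [0.42, 0.55, 0.31, 0.68, 0.50]
    annotator_scores = [3.0, 4.0, 2.5, 4.5, 3.5]
    r, p = pearsonr(metric_scores, annotator_scores)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")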
Anthology ID:
2010.amta-government.1
Volume:
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program
Month:
October 31-November 4
Year:
2010
Address:
Denver, Colorado, USA
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
URL:
https://aclanthology.org/2010.amta-government.1
Cite (ACL):
Carol Van Ess-Dykema, Jocelyn Phillips, Florence Reeder, and Laurie Gerber. 2010. Paralinguist Assessment Decision Factors For Machine Translation Output: A Case Study. In Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program, Denver, Colorado, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Paralinguist Assessment Decision Factors For Machine Translation Output: A Case Study (Van Ess-Dykema et al., AMTA 2010)
PDF:
https://aclanthology.org/2010.amta-government.1.pdf