Elaine O’Curran


2017

Comparative Evaluation of NMT with Established SMT Programs
Lena Marg | Naoko Miyazaki | Elaine O’Curran | Tanja Schmidt
Proceedings of Machine Translation Summit XVI: Commercial MT Users and Translators Track

2015

MT quality evaluations: from test environment to production
Elaine O’Curran
Proceedings of Machine Translation Summit XV: User Track

2014

Machine translation and post-editing for user generated content: an LSP perspective
Elaine O’Curran
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Users Track

Translation quality in post-edited versus human-translated segments: a case study
Elaine O’Curran
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas

We analyze the linguistic quality results of a post-editing productivity test containing a 3:1 ratio of post-edited to human-translated segments, to assess whether the final translation quality differs between the two segment types and to investigate the types of errors found in each. Overall, we find that the human-translated segments contain more errors per word than the post-edited segments. Although the error categories logged are similar across the two segment types, the most notable difference is that the number of stylistic errors in the human translations is three times higher than in the post-edited translations.