Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests

Takuya Matsuzaki, Akira Fujita, Naoya Todo, Noriko H. Arai

Abstract
This paper reports on an experiment in which 795 human participants answered questions taken from second language proficiency tests that had been translated into their native language. The outputs of three machine translation systems and two different human translations were used as the test material. We classified the translation errors in the questions according to an error taxonomy and analyzed the participants' responses on the basis of the type and frequency of the translation errors. Through this analysis, we identified several types of errors that most degraded the accuracy of the participants' answers, their confidence in their answers, and their overall evaluation of the translation quality.
Anthology ID:
L16-1440
Volume:
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Month:
May
Year:
2016
Address:
Portorož, Slovenia
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
2771–2776
URL:
https://aclanthology.org/L16-1440
Cite (ACL):
Takuya Matsuzaki, Akira Fujita, Naoya Todo, and Noriko H. Arai. 2016. Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2771–2776, Portorož, Slovenia. European Language Resources Association (ELRA).
Cite (Informal):
Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests (Matsuzaki et al., LREC 2016)
PDF:
https://aclanthology.org/L16-1440.pdf