A Reading Comprehension Corpus for Machine Translation Evaluation

Carolina Scarton, Lucia Specia


Abstract
Effectively assessing the output of Natural Language Processing tasks is a challenge for research in the area. In the case of Machine Translation (MT), automatic metrics are usually preferred over human evaluation, given time and budget constraints. However, traditional automatic metrics (such as BLEU) are not reliable for the absolute quality assessment of documents, often producing similar scores for documents translated by the same MT system. For scenarios where absolute labels are necessary for building models, such as document-level Quality Estimation, these metrics cannot be fully trusted. In this paper, we introduce a corpus of reading comprehension tests based on machine-translated documents, where we evaluate documents based on the answers that fluent speakers of the target language give to comprehension questions. We describe the process of creating this resource, the experimental design, and the agreement between test takers. Finally, we discuss ways to convert the reading comprehension tests into document-level quality scores.
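The conversion step the abstract closes with, turning answers to comprehension questions into a document-level quality score, can be illustrated with a minimal sketch. This is not the authors' exact conversion method: the [0, 1] marking scale and the per-question difficulty weights below are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class Answer:
    """One marked answer to a comprehension question about a document."""
    score: float         # marker's score for this answer, assumed in [0, 1]
    weight: float = 1.0  # assumed question weight, e.g. higher for harder questions

def document_quality(answers: list[Answer]) -> float:
    """Weighted average of per-question scores for one machine-translated
    document; a higher value means test takers answered more questions
    correctly, suggesting a more comprehensible translation."""
    if not answers:
        raise ValueError("no answers for this document")
    total_weight = sum(a.weight for a in answers)
    return sum(a.score * a.weight for a in answers) / total_weight

# Example: three questions, the harder one weighted more heavily.
doc_score = document_quality([Answer(1.0), Answer(0.5), Answer(0.0, weight=2.0)])
print(f"document-level quality: {doc_score:.2f}")  # 0.38

A weighted average of this kind rewards documents whose translations preserved enough meaning for test takers to answer correctly, which is the intuition behind using reading comprehension tests for MT evaluation.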
Anthology ID:
L16-1579
Volume:
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Month:
May
Year:
2016
Address:
Portorož, Slovenia
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3652–3658
URL:
https://aclanthology.org/L16-1579
Cite (ACL):
Carolina Scarton and Lucia Specia. 2016. A Reading Comprehension Corpus for Machine Translation Evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3652–3658, Portorož, Slovenia. European Language Resources Association (ELRA).
Cite (Informal):
A Reading Comprehension Corpus for Machine Translation Evaluation (Scarton & Specia, LREC 2016)
PDF:
https://aclanthology.org/L16-1579.pdf
Code:
carolscarton/CREG-MT-eval