A Framework for Evaluation of Machine Reading Comprehension Gold Standards

Viktor Schlegel, Marco Valentino, Andre Freitas, Goran Nenadic, Riza Batista-Navarro


Abstract
Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text. While neural MRC systems gain popularity and achieve noticeable performance, issues are being raised with the methodology used to establish their performance, particularly concerning the design of the gold-standard data used to evaluate them. There is only a limited understanding of the challenges present in this data, which makes it hard to draw comparisons and formulate reliable hypotheses. As a first step towards alleviating the problem, this paper proposes a unifying framework to systematically investigate, on the one hand, the present linguistic features, required reasoning and background knowledge, and factual correctness, and, on the other hand, the presence of lexical cues as a lower bound for the requirement of understanding. We propose a qualitative annotation schema for the former and a set of approximative metrics for the latter. In a first application of the framework, we analyse modern MRC gold standards and present our findings: the absence of features that contribute towards lexical ambiguity, the varying factual correctness of the expected answers, and the presence of lexical cues, all of which potentially lower the reading comprehension complexity and quality of the evaluation data.
Anthology ID:
2020.lrec-1.660
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
5359–5369
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.660
Cite (ACL):
Viktor Schlegel, Marco Valentino, Andre Freitas, Goran Nenadic, and Riza Batista-Navarro. 2020. A Framework for Evaluation of Machine Reading Comprehension Gold Standards. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5359–5369, Marseille, France. European Language Resources Association.
Cite (Informal):
A Framework for Evaluation of Machine Reading Comprehension Gold Standards (Schlegel et al., LREC 2020)
PDF:
https://aclanthology.org/2020.lrec-1.660.pdf
Code
 schlevik/dataset-analysis
Data
DROP, ReCoRD