R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason

Naoya Inoue, Pontus Stenetorp, Kentaro Inui


Abstract
Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets. This prevents the community from reliably measuring the progress of RC systems. To address this issue, we introduce R4C, a new task for evaluating RC systems' internal reasoning. R4C requires giving not only answers but also derivations: explanations that justify predicted answers. We present a reliable, crowdsourced framework for scalably annotating RC datasets with derivations. We create and publicly release the R4C dataset, the first quality-assured dataset of its kind, consisting of 4.6k questions, each annotated with three reference derivations (i.e., 13.8k derivations in total). Experiments show that our automatic evaluation metrics using multiple reference derivations are reliable, and that R4C assesses different skills from an existing benchmark.
Anthology ID:
2020.acl-main.602
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6740–6750
URL:
https://aclanthology.org/2020.acl-main.602
DOI:
10.18653/v1/2020.acl-main.602
Cite (ACL):
Naoya Inoue, Pontus Stenetorp, and Kentaro Inui. 2020. R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6740–6750, Online. Association for Computational Linguistics.
Cite (Informal):
R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason (Inoue et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.602.pdf
Video:
http://slideslive.com/38928927
Data:
HotpotQA