Are Red Roses Red? Evaluating Consistency of Question-Answering Models

Marco Tulio Ribeiro, Carlos Guestrin, Sameer Singh


Abstract
Although current evaluation of question-answering systems treats predictions in isolation, we need to consider the relationship between predictions to measure true understanding. A model should be penalized for answering “no” to “Is the rose red?” if it answers “red” to “What color is the rose?”. We propose a method to automatically extract such implications for instances from two QA datasets, VQA and SQuAD, which we then use to evaluate the consistency of models. Human evaluation shows these generated implications are well formed and valid. Consistency evaluation provides crucial insights into gaps in existing models, while retraining with implication-augmented data improves consistency on both synthetic and human-generated implications.
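The implication extraction the abstract describes can be illustrated with a toy template. This is a hypothetical sketch, not the paper's actual rule set (which uses richer linguistic processing); it only covers "What color is the X?" questions, rewriting the answer into a yes/no question whose expected answer is "yes":

```python
import re

def color_implication(question: str, answer: str):
    """Toy implication generator: turn ('What color is the X?', answer)
    into a yes/no consistency probe. Returns None for uncovered patterns.
    Hypothetical illustration only -- the paper's system is more general."""
    m = re.match(r"What color is the (\w+)\?", question)
    if m is None:
        return None  # question shape not handled by this template
    noun = m.group(1)
    # A consistent model answering `answer` above should answer "yes" here.
    return (f"Is the {noun} {answer}?", "yes")

print(color_implication("What color is the rose?", "red"))
# → ('Is the rose red?', 'yes')
```

A consistency evaluation would then check whether the model's answer to the generated question matches the expected "yes"; a mismatch is the kind of inconsistency the paper penalizes.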
Anthology ID:
P19-1621
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6174–6184
URL:
https://aclanthology.org/P19-1621
DOI:
10.18653/v1/P19-1621
Cite (ACL):
Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are Red Roses Red? Evaluating Consistency of Question-Answering Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6174–6184, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Are Red Roses Red? Evaluating Consistency of Question-Answering Models (Ribeiro et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1621.pdf
Code:
marcotcr/qa_consistency