Comparative Analysis of Neural QA models on SQuAD

Soumya Wadhwa, Khyathi Chandu, Eric Nyberg


Abstract
The task of Question Answering has gained prominence in the past few decades for testing the ability of machines to understand natural language. Large datasets for Machine Reading have led to the development of neural models that cater to deeper language understanding compared to information retrieval tasks. Different components in these neural architectures are intended to tackle different challenges. As a first step towards achieving generalization across multiple domains, we attempt to understand and compare the peculiarities of existing end-to-end neural models on the Stanford Question Answering Dataset (SQuAD) by performing quantitative as well as qualitative analysis of the results attained by each of them. We observed that prediction errors reflect certain model-specific biases, which we further discuss in this paper.
Anthology ID:
W18-2610
Volume:
Proceedings of the Workshop on Machine Reading for Question Answering
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Eunsol Choi, Minjoon Seo, Danqi Chen, Robin Jia, Jonathan Berant
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
89–97
URL:
https://aclanthology.org/W18-2610
DOI:
10.18653/v1/W18-2610
Cite (ACL):
Soumya Wadhwa, Khyathi Chandu, and Eric Nyberg. 2018. Comparative Analysis of Neural QA models on SQuAD. In Proceedings of the Workshop on Machine Reading for Question Answering, pages 89–97, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Comparative Analysis of Neural QA models on SQuAD (Wadhwa et al., ACL 2018)
PDF:
https://aclanthology.org/W18-2610.pdf
Data
SQuAD, TriviaQA