Making Neural QA as Simple as Possible but not Simpler

Dirk Weissenborn, Georg Wiese, Laura Seiffe


Abstract
The recent development of large-scale question answering (QA) datasets has triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that two ingredients are necessary for building a high-performing neural QA system: first, awareness of question words while processing the context, and second, a composition function, such as a recurrent neural network, that goes beyond simple bag-of-words modeling. Our results show that FastQA, a system that meets these two requirements, achieves very competitive performance compared with existing models. We argue that this surprising finding puts the results of previous systems and the complexity of recent QA datasets into perspective.
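The two ingredients named in the abstract lend themselves to a compact illustration. The following is a minimal sketch, not the authors' published FastQA implementation: it encodes question awareness as a binary word-in-question feature and uses a bidirectional LSTM as the composition function. The class name, dimensions, span-scoring heads, and the simplification of ignoring padding are all illustrative assumptions.

import torch
import torch.nn as nn

class FastQASketch(nn.Module):
    """Minimal FastQA-style extractive QA encoder (illustrative sketch)."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # +1 input feature: the binary word-in-question indicator.
        self.encoder = nn.LSTM(emb_dim + 1, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Per-position scores for answer-span start and end.
        self.start_scorer = nn.Linear(2 * hidden_dim, 1)
        self.end_scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, context_ids, question_ids):
        # Ingredient 1, question awareness: mark each context token that
        # also occurs in the question. (Padding handling omitted for brevity;
        # a real system would mask pad tokens.)
        wiq = (context_ids.unsqueeze(-1) == question_ids.unsqueeze(1)).any(-1)
        x = torch.cat([self.embed(context_ids),
                       wiq.float().unsqueeze(-1)], dim=-1)
        # Ingredient 2, composition beyond bag-of-words: a BiLSTM over the
        # question-aware context representations.
        h, _ = self.encoder(x)  # shape: [batch, context_len, 2 * hidden_dim]
        return (self.start_scorer(h).squeeze(-1),
                self.end_scorer(h).squeeze(-1))

A forward pass over batches of context and question token ids yields per-position start and end scores, from which an answer span can be read off, mirroring the extractive QA setting described in the abstract.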
Anthology ID:
K17-1028
Volume:
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Roger Levy, Lucia Specia
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
271–280
URL:
https://aclanthology.org/K17-1028
DOI:
10.18653/v1/K17-1028
Cite (ACL):
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making Neural QA as Simple as Possible but not Simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 271–280, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Making Neural QA as Simple as Possible but not Simpler (Weissenborn et al., CoNLL 2017)
PDF:
https://aclanthology.org/K17-1028.pdf
Code:
additional community code
Data:
NewsQA, SQuAD