Cut to the Chase: A Context Zoom-in Network for Reading Comprehension

Sathish Reddy Indurthi, Seunghak Yu, Seohyun Back, Heriberto Cuayáhuitl


Abstract
In recent years, many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models struggle to reason over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset ‘NarrativeQA’. The proposed architecture achieves a 12.62% relative improvement (ROUGE-L) over state-of-the-art results.
Anthology ID:
D18-1054
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
570–575
URL:
https://aclanthology.org/D18-1054
DOI:
10.18653/v1/D18-1054
Cite (ACL):
Sathish Reddy Indurthi, Seunghak Yu, Seohyun Back, and Heriberto Cuayáhuitl. 2018. Cut to the Chase: A Context Zoom-in Network for Reading Comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 570–575, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Cut to the Chase: A Context Zoom-in Network for Reading Comprehension (Indurthi et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1054.pdf
Attachment:
 D18-1054.Attachment.zip
Video:
 https://aclanthology.org/D18-1054.mp4
Data:
NarrativeQA, SQuAD