Answerable or Not: Devising a Dataset for Extending Machine Reading Comprehension

Mao Nakanishi, Tetsunori Kobayashi, Yoshihiko Hayashi


Abstract
Machine reading comprehension (MRC) has recently attracted attention in the fields of natural language processing and machine learning. One problematic presumption of current MRC technologies is that every question can be answered from the given text passage. However, to realize human-like language comprehension ability, a machine should also be able to distinguish not-answerable questions (NAQs) from answerable ones. To develop this functionality, a dataset incorporating hard-to-detect NAQs is vital; however, constructing such a dataset manually would be expensive. This paper proposes a dataset creation method that alters an existing MRC dataset, the Stanford Question Answering Dataset, and describes the resulting dataset. The value of this dataset increases if each NAQ is properly labeled with the difficulty of identifying it as an NAQ, since such difficulty levels allow researchers to evaluate a machine’s NAQ detection performance more precisely. Therefore, we propose a method for automatically assigning difficulty level labels that measures the similarity between a question and the target text passage. Our NAQ detection experiments demonstrate that the resulting dataset, with its difficulty level annotations, is valid and potentially useful in the development of advanced MRC models.
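The abstract describes assigning difficulty labels by measuring question–passage similarity, without specifying the measure here. As a minimal illustrative sketch (not the paper's actual method), the following assumes bag-of-words cosine similarity and hypothetical thresholds: an NAQ that shares more vocabulary with the passage is plausibly harder to reject as unanswerable.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between bag-of-words count vectors of two strings."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def difficulty_label(question, passage, thresholds=(0.2, 0.5)):
    """Map question-passage similarity to a coarse difficulty level.

    The thresholds are hypothetical; higher lexical overlap with the
    passage is taken to make an NAQ harder to identify as such.
    """
    sim = cosine_similarity(question, passage)
    if sim >= thresholds[1]:
        return "hard"
    if sim >= thresholds[0]:
        return "medium"
    return "easy"
```

For example, a question reusing most of a passage's words would be labeled "hard", while one sharing no vocabulary with the passage would be labeled "easy".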
Anthology ID:
C18-1083
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
973–983
URL:
https://aclanthology.org/C18-1083
Cite (ACL):
Mao Nakanishi, Tetsunori Kobayashi, and Yoshihiko Hayashi. 2018. Answerable or Not: Devising a Dataset for Extending Machine Reading Comprehension. In Proceedings of the 27th International Conference on Computational Linguistics, pages 973–983, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Answerable or Not: Devising a Dataset for Extending Machine Reading Comprehension (Nakanishi et al., COLING 2018)
PDF:
https://aclanthology.org/C18-1083.pdf
Data
CBT, WikiQA