Harvesting and Refining Question-Answer Pairs for Unsupervised QA

Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, Ke Xu


Abstract
Question Answering (QA) has shown great success thanks to the availability of large-scale datasets and the effectiveness of neural models. Recent research has attempted to extend these successes to settings with few or no labeled data. In this work, we introduce two approaches to improve unsupervised QA. First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA). Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data in RefQA. We conduct experiments on SQuAD 1.1 and NewsQA by fine-tuning BERT without access to manually annotated data. Our approach outperforms previous unsupervised approaches by a large margin and is competitive with early supervised models. We also show the effectiveness of our approach in the few-shot learning setting.
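The abstract describes an iterative refinement loop: a QA model is first trained on the harvested RefQA pairs, and its own predictions are then used to replace noisy answers before retraining. The sketch below is a minimal, hypothetical illustration of such a loop; the train_fn/predict_fn callables, the confidence threshold, and the data layout are assumptions for illustration, not the authors' implementation.

from typing import Callable, Dict, List, Tuple

def refine_qa_data(
    pairs: List[Dict[str, str]],                                   # each: {"context", "question", "answer"}
    train_fn: Callable[[List[Dict[str, str]]], object],            # assumed: fine-tunes a QA model (e.g. BERT)
    predict_fn: Callable[[object, str, str], Tuple[str, float]],   # assumed: returns (predicted span, score)
    rounds: int = 2,
    threshold: float = 0.5,
) -> Tuple[object, List[Dict[str, str]]]:
    """Iteratively refine noisy harvested answers with the QA model's own predictions."""
    data = list(pairs)
    model = train_fn(data)                      # initial model trained on the harvested pairs
    for _ in range(rounds):
        refined = []
        for ex in data:
            span, score = predict_fn(model, ex["context"], ex["question"])
            # Replace the harvested answer when the model confidently predicts a different span.
            if score >= threshold and span and span != ex["answer"]:
                ex = {**ex, "answer": span}
            refined.append(ex)
        data = refined
        model = train_fn(data)                  # retrain on the refined question-answer pairs
    return model, data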
Anthology ID:
2020.acl-main.600
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6719–6728
URL:
https://aclanthology.org/2020.acl-main.600
DOI:
10.18653/v1/2020.acl-main.600
Cite (ACL):
Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, and Ke Xu. 2020. Harvesting and Refining Question-Answer Pairs for Unsupervised QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6719–6728, Online. Association for Computational Linguistics.
Cite (Informal):
Harvesting and Refining Question-Answer Pairs for Unsupervised QA (Li et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.600.pdf
Video:
http://slideslive.com/38928860
Code:
Neutralzz/RefQA
Data:
NewsQA
SQuAD