Unsupervised Question Answering for Fact-Checking

Mayank Jobanputra


Abstract
Recent Deep Learning (DL) models have achieved human-level accuracy on various natural language tasks such as question answering, natural language inference (NLI), and textual entailment. These tasks require not only contextual knowledge but also reasoning abilities to be solved efficiently. In this paper, we propose an unsupervised question-answering-based approach for a similar task: fact-checking. We transform the FEVER dataset into a Cloze task by masking the named entities provided in the claims. To predict the answer token, we utilize pre-trained Bidirectional Encoder Representations from Transformers (BERT). The classifier then computes the label based on the number of correctly answered questions and a threshold. Currently, the classifier is able to classify claims as “SUPPORTS” and “MANUAL_REVIEW”. This approach achieves a label accuracy of 80.2% on the development set and 80.25% on the test set of the transformed dataset.
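The sketch below illustrates the Cloze-style verification idea described in the abstract, assuming the Hugging Face transformers fill-mask pipeline with bert-base-uncased; the claim, entity list, threshold, and the verify_claim helper are illustrative assumptions, not the paper's released implementation.

# Minimal sketch (illustrative, not the author's code): mask a named entity
# in a claim, let a pre-trained BERT masked LM fill the blank, and label the
# claim "SUPPORTS" if enough entities are recovered, else defer to review.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT

def verify_claim(claim, entities, threshold=0.5):
    """Mask each (single-token) named entity and check whether BERT predicts it back."""
    correct = 0
    for entity in entities:
        masked_claim = claim.replace(entity, MASK, 1)
        predictions = fill_mask(masked_claim, top_k=5)
        if any(p["token_str"].strip().lower() == entity.lower() for p in predictions):
            correct += 1
    # Claims below the threshold are deferred for manual review rather than refuted.
    return "SUPPORTS" if correct / len(entities) >= threshold else "MANUAL_REVIEW"

print(verify_claim("Paris is the capital of France.", ["Paris"]))  # typically "SUPPORTS"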
Anthology ID: D19-6609
Volume: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal
Venue: WS
Publisher: Association for Computational Linguistics
Pages: 52–56
URL: https://aclanthology.org/D19-6609
DOI: 10.18653/v1/D19-6609
Cite (ACL): Mayank Jobanputra. 2019. Unsupervised Question Answering for Fact-Checking. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 52–56, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Unsupervised Question Answering for Fact-Checking (Jobanputra, 2019)
PDF: https://aclanthology.org/D19-6609.pdf