An End-to-End Multi-task Learning Model for Fact Checking

Sizhen Li, Shuai Zhao, Bo Cheng, Hao Yang


Abstract
With the huge amount of information generated on the web every day, fact checking is an important and challenging task that helps people assess the authenticity of claims and provides evidence selected from a knowledge source such as Wikipedia. We decompose the problem into two parts: an entity linking task (retrieving relevant Wikipedia pages) and recognizing textual entailment between the claim and the selected pages. In this paper, we present an end-to-end multi-task learning with bi-directional attention (EMBA) model that classifies a claim as “supports”, “refutes”, or “not enough info” with respect to the retrieved pages and detects evidence sentences at the same time. We conduct experiments on the FEVER (Fact Extraction and VERification) paper test set and shared task test set, a new public dataset for verification against textual sources. Experimental results show that our method achieves performance comparable to that of the baseline system.
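The abstract describes training the claim classifier and the evidence detector jointly. The paper itself defines the actual model; as a rough illustration of the general shape of such a joint objective, here is a minimal sketch in plain Python. The three-way cross-entropy over FEVER labels, the per-sentence binary evidence loss, and the `weight` hyperparameter are assumptions for illustration, not details taken from the paper.

```python
import math

# The three FEVER verdict labels from the task definition.
LABELS = ["supports", "refutes", "not enough info"]

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multitask_loss(label_scores, gold_label, sentence_scores, gold_evidence, weight=1.0):
    """Hypothetical joint objective: claim-classification loss plus a
    weighted evidence-detection loss, as one way to combine the two tasks."""
    # Task 1: cross-entropy over the three verdict labels.
    probs = softmax(label_scores)
    label_loss = -math.log(probs[LABELS.index(gold_label)])
    # Task 2: binary cross-entropy per candidate evidence sentence.
    evid_loss = 0.0
    for score, is_evidence in zip(sentence_scores, gold_evidence):
        p = 1.0 / (1.0 + math.exp(-score))  # sigmoid
        evid_loss += -math.log(p if is_evidence else 1.0 - p)
    evid_loss /= len(sentence_scores)
    return label_loss + weight * evid_loss
```

A model that scores the correct label and the true evidence sentences highly should incur a lower joint loss than one that does not, which is the behavior the shared objective is meant to encourage.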
Anthology ID:
W18-5523
Volume:
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
Month:
November
Year:
2018
Address:
Brussels, Belgium
Venues:
EMNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
138–144
URL:
https://aclanthology.org/W18-5523
DOI:
10.18653/v1/W18-5523
Cite (ACL):
Sizhen Li, Shuai Zhao, Bo Cheng, and Hao Yang. 2018. An End-to-End Multi-task Learning Model for Fact Checking. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 138–144, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
An End-to-End Multi-task Learning Model for Fact Checking (Li et al., 2018)
PDF:
https://aclanthology.org/W18-5523.pdf
Data
FEVER