A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking

Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, Iryna Gurevych


Abstract
Automated fact-checking based on machine learning is a promising approach to identify false information distributed on the web. In order to achieve satisfactory performance, machine learning methods require a large corpus with reliable annotations for the different tasks in the fact-checking process. Having analyzed existing fact-checking corpora, we found that none of them meets these criteria in full. They are either too small in size, do not provide detailed annotations, or are limited to a single domain. Motivated by this gap, we present a new substantially sized mixed-domain corpus with annotations of good quality for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation. To aid future corpus construction, we describe our methodology for corpus creation and annotation, and demonstrate that it results in substantial inter-annotator agreement. As baselines for future research, we perform experiments on our corpus with a number of model architectures that reach high performance in similar problem settings. Finally, to support the development of future models, we provide a detailed error analysis for each of the tasks. Our results show that the realistic, multi-domain setting defined by our data poses new challenges for the existing models, providing opportunities for considerable improvement by future systems.
Anthology ID: K19-1046
Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Mohit Bansal, Aline Villavicencio
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 493–503
URL: https://aclanthology.org/K19-1046
DOI: 10.18653/v1/K19-1046
Cite (ACL): Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. 2019. A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 493–503, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking (Hanselowski et al., CoNLL 2019)
PDF: https://aclanthology.org/K19-1046.pdf
Attachment: K19-1046.Attachment.pdf
Supplementary material: K19-1046.Supplementary_Material.pdf
Code: UKPLab/conll2019-snopes-experiments + additional community code
Data: FEVER