%0 Conference Proceedings
%T A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking
%A Hanselowski, Andreas
%A Stab, Christian
%A Schulz, Claudia
%A Li, Zile
%A Gurevych, Iryna
%Y Bansal, Mohit
%Y Villavicencio, Aline
%S Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
%D 2019
%8 November
%I Association for Computational Linguistics
%C Hong Kong, China
%F hanselowski-etal-2019-richly
%X Automated fact-checking based on machine learning is a promising approach to identify false information distributed on the web. In order to achieve satisfactory performance, machine learning methods require a large corpus with reliable annotations for the different tasks in the fact-checking process. Having analyzed existing fact-checking corpora, we found that none of them meets these criteria in full. They are either too small in size, do not provide detailed annotations, or are limited to a single domain. Motivated by this gap, we present a new substantially sized mixed-domain corpus with annotations of good quality for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation. To aid future corpus construction, we describe our methodology for corpus creation and annotation, and demonstrate that it results in substantial inter-annotator agreement. As baselines for future research, we perform experiments on our corpus with a number of model architectures that reach high performance in similar problem settings. Finally, to support the development of future models, we provide a detailed error analysis for each of the tasks. Our results show that the realistic, multi-domain setting defined by our data poses new challenges for the existing models, providing opportunities for considerable improvement by future systems.
%R 10.18653/v1/K19-1046
%U https://aclanthology.org/K19-1046
%U https://doi.org/10.18653/v1/K19-1046
%P 493-503