%0 Conference Proceedings
%T Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning
%A Sathe, Aalok
%A Park, Joonsuk
%Y Aly, Rami
%Y Christodoulopoulos, Christos
%Y Cocarascu, Oana
%Y Guo, Zhijiang
%Y Mittal, Arpit
%Y Schlichtkrull, Michael
%Y Thorne, James
%Y Vlachos, Andreas
%S Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)
%D 2021
%8 November
%I Association for Computational Linguistics
%C Dominican Republic
%F sathe-park-2021-automatic
%X Automatic fact-checking is crucial for recognizing misinformation spreading on the internet. Most existing fact-checkers break down the process into several subtasks, one of which determines candidate evidence sentences that can potentially support or refute the claim to be verified; typically, evidence sentences with gold-standard labels are needed for this. In a more realistic setting, however, such sentence-level annotations are not available. In this paper, we tackle the natural language inference (NLI) subtask—given a document and a (sentence) claim, determine whether the document supports or refutes the claim—only using document-level annotations. Using fine-tuned BERT and multiple instance learning, we achieve 81.9% accuracy, significantly outperforming the existing results on the WikiFactCheck-English dataset.
%R 10.18653/v1/2021.fever-1.11
%U https://aclanthology.org/2021.fever-1.11
%U https://doi.org/10.18653/v1/2021.fever-1.11
%P 101-107