Fact Checking Beyond Training Set

Payam Karisani, Heng Ji


Abstract
Evaluating the veracity of everyday claims is time-consuming and in some cases requires domain expertise. We empirically demonstrate that the commonly used fact-checking pipeline, known as the retriever-reader, suffers from performance deterioration when it is trained on labeled data from one domain and used in another. We then delve into each component of the pipeline and propose novel algorithms to address this problem. We propose an adversarial algorithm to make the retriever component robust against distribution shift. Our core idea is to first train a bi-encoder on the labeled source data, and then to adversarially train two separate document and claim encoders using unlabeled target data. We next focus on the reader component and propose to train it so that it is insensitive to the order of claims and evidence documents. Our empirical evaluations support the hypothesis that such a reader is more robust to distribution shift. To our knowledge, there is no publicly available multi-topic fact-checking dataset, so we propose a simple automatic method to repurpose two well-known fact-checking datasets. We then construct eight fact-checking scenarios from these datasets and compare our model to a set of strong baselines, including recent domain adaptation models that use GPT-4 to generate synthetic data.
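The two components above lend themselves to short illustrations. Below is a minimal, hypothetical PyTorch sketch of the retriever-adaptation step in the spirit of ADDA-style adversarial domain adaptation: after the bi-encoder is trained on labeled source data, a target-side encoder is trained against a domain discriminator on unlabeled target text so that its embeddings become indistinguishable from the source embedding space. Every name here (Discriminator, adapt_encoder), the assumption that an encoder maps a batch of texts to a tensor of embeddings, and all training details are illustrative assumptions, not the paper's exact procedure.

# Hypothetical ADDA-style sketch: adapt a target-domain encoder toward a
# frozen source-domain encoder using only unlabeled target text. All names
# and hyperparameters are assumptions for illustration, not the paper's method.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores whether an embedding came from the source or the target encoder."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb)  # one logit per embedding

def adapt_encoder(source_enc, target_enc, disc, src_batches, tgt_batches, lr=1e-5):
    """Alternating adversarial updates. `source_enc` stays frozen; `target_enc`
    learns to produce embeddings the discriminator cannot tell apart from the
    source-domain ones. Per the abstract's description, this would be run
    separately for the claim encoder and the document encoder of the bi-encoder.
    Assumes each encoder maps a batch of texts to a (batch, dim) tensor."""
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.AdamW(target_enc.parameters(), lr=lr)
    opt_d = torch.optim.AdamW(disc.parameters(), lr=lr)
    for src_texts, tgt_texts in zip(src_batches, tgt_batches):
        with torch.no_grad():
            src_emb = source_enc(src_texts)          # frozen reference space
        tgt_emb = target_enc(tgt_texts)
        # Step 1: the discriminator learns source (label 1) vs. target (label 0).
        d_loss = bce(disc(src_emb), torch.ones(src_emb.size(0), 1)) \
               + bce(disc(tgt_emb.detach()), torch.zeros(tgt_emb.size(0), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Step 2: the target encoder tries to fool the updated discriminator.
        g_loss = bce(disc(tgt_emb), torch.ones(tgt_emb.size(0), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The reader-side idea, insensitivity to input order, can similarly be approximated by randomly permuting the retrieved evidence in each training example. The sketch below is likewise an assumed reading of the abstract, not the authors' exact training scheme.

import random

def shuffle_evidence(example: dict) -> dict:
    """Randomly permute the evidence list (hypothetical augmentation): the
    reader sees a different document order each epoch, so it cannot latch
    onto positional cues that may not transfer across domains."""
    docs = list(example["evidence"])
    random.shuffle(docs)
    return {**example, "evidence": docs}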
Anthology ID: 2024.naacl-long.124
Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 2247–2261
URL: https://aclanthology.org/2024.naacl-long.124
Cite (ACL): Payam Karisani and Heng Ji. 2024. Fact Checking Beyond Training Set. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2247–2261, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Fact Checking Beyond Training Set (Karisani & Ji, NAACL 2024)
PDF: https://aclanthology.org/2024.naacl-long.124.pdf
Copyright: 2024.naacl-long.124.copyright.pdf