Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs

Moy Yuan, Andreas Vlachos


Abstract
Despite progress in automated fact-checking, most systems require a significant amount of labeled training data, which is expensive to obtain. In this paper, we propose a novel zero-shot method which, instead of operating directly on the claim and evidence sentences, decomposes them into semantic triples augmented using external knowledge graphs, and uses large language models trained for natural language inference. This allows it to generalize to adversarial datasets and to domains for which supervised models require specific training data. Our empirical results show that our approach outperforms previous zero-shot approaches on FEVER, FEVER-Symmetric, FEVER 2.0, and Climate-FEVER, while being comparable to or better than supervised models on the adversarial and out-of-domain datasets.
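
The following is a minimal sketch of the triple-plus-NLI idea described in the abstract, assuming an off-the-shelf NLI model (roberta-large-mnli) and hand-written triples standing in for the paper's semantic-triple decomposition; the knowledge-graph augmentation step and the triple extractor are not reproduced here.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf NLI model; the paper's actual model choice may differ.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def verbalize(triple):
    """Render a (subject, relation, object) triple as a plain sentence."""
    subject, relation, obj = triple
    return f"{subject} {relation} {obj}."

# Hypothetical triples, e.g. produced by an OpenIE-style extractor over the
# evidence and claim sentences (the extractor itself is omitted here).
evidence_triple = ("Barack Obama", "was born in", "Honolulu, Hawaii")
claim_triple = ("Barack Obama", "was born in", "Kenya")

# Score the claim triple (hypothesis) against the evidence triple (premise).
inputs = tokenizer(verbalize(evidence_triple), verbalize(claim_triple),
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
nli_label = model.config.id2label[logits.argmax(dim=-1).item()]

# Map NLI labels onto FEVER-style verdicts.
verdict = {"ENTAILMENT": "SUPPORTED",
           "CONTRADICTION": "REFUTED",
           "NEUTRAL": "NOT ENOUGH INFO"}[nli_label]
print(verdict)  # expected: REFUTED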
Anthology ID:
2024.kallm-1.11
Volume:
Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues:
KaLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
105–115
URL:
https://aclanthology.org/2024.kallm-1.11
DOI:
10.18653/v1/2024.kallm-1.11
Cite (ACL):
Moy Yuan and Andreas Vlachos. 2024. Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 105–115, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs (Yuan & Vlachos, KaLLM-WS 2024)
PDF:
https://aclanthology.org/2024.kallm-1.11.pdf