Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking

Dina Pisarevskaya, Arkaitz Zubiaga


Abstract
The claim matching (CM) task can benefit an automated fact-checking pipeline by grouping together claims that can be resolved with the same fact-check. In this work, we are the first to explore zero-shot and few-shot learning approaches to the task. We frame CM as a binary classification task and experiment with a set of instruction-following large language models (GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, and Llama-3-8B-Instruct), investigating prompt templates. We introduce a new CM dataset, ClaimMatch, which will be released upon acceptance. We put LLMs to the test on the CM task and find that it can be tackled by leveraging more mature yet similar tasks such as natural language inference or paraphrase detection. We also propose a pipeline for CM, which we evaluate on texts of different lengths.
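The binary framing described in the abstract can be sketched as a prompt-and-parse pair. This is a minimal illustration only: the paper's actual prompt templates differ, so the wording, function names, and Yes/No protocol below are assumptions, not the authors' templates.

```python
# Hypothetical sketch of zero-shot claim matching (CM) as binary
# classification: ask an instruction-following LLM whether one
# fact-check would resolve both claims, then map the reply to a label.

def build_cm_prompt(claim_a: str, claim_b: str) -> str:
    """Build a zero-shot CM prompt (illustrative wording, not the
    paper's template)."""
    return (
        "You are assisting an automated fact-checking pipeline.\n"
        f"Claim 1: {claim_a}\n"
        f"Claim 2: {claim_b}\n"
        "Can both claims be resolved by the same fact-check? "
        "Answer with exactly one word: Yes or No."
    )

def parse_cm_answer(model_output: str) -> bool:
    """Map the model's free-text reply to a binary CM label."""
    return model_output.strip().lower().startswith("yes")
```

In a few-shot variant, labeled claim pairs would simply be prepended to the same prompt before the query pair.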
Anthology ID:
2025.coling-main.650
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
9721–9736
URL:
https://aclanthology.org/2025.coling-main.650/
Cite (ACL):
Dina Pisarevskaya and Arkaitz Zubiaga. 2025. Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9721–9736, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking (Pisarevskaya & Zubiaga, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.650.pdf