VILLAIN at AVerImaTeC: Verifying Image–Text Claims via Multi-Agent Collaboration

Jaeyoon Jung, Yejun Yoon, Seunghyun Yoon, Kunwoo Park


Abstract
This paper describes VILLAIN, a multimodal fact-checking system that verifies image-text claims through prompt-based multi-agent collaboration. For the AVerImaTeC shared task, VILLAIN employs vision-language model agents across multiple stages of fact-checking. Textual and visual evidence is retrieved from a knowledge store enriched with additionally collected web data. To identify key information and resolve inconsistencies among evidence items, modality-specific and cross-modal agents generate analysis reports. In the subsequent stage, question-answer pairs are produced based on these reports. Finally, the Verdict Prediction agent produces the verification outcome from the image-text claim and the generated question-answer pairs. Our system ranked first on the leaderboard across all evaluation metrics. The source code is publicly available at https://github.com/ssu-humane/VILLAIN.
Anthology ID:
2026.fever-1.9
Volume:
Proceedings of the Ninth Fact Extraction and VERification Workshop (FEVER)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Mubashara Akhtar, Rami Aly, Rui Cao, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
Venues:
FEVER | WS
Publisher:
Association for Computational Linguistics
Pages:
114–126
URL:
https://aclanthology.org/2026.fever-1.9/
Cite (ACL):
Jaeyoon Jung, Yejun Yoon, Seunghyun Yoon, and Kunwoo Park. 2026. VILLAIN at AVerImaTeC: Verifying Image–Text Claims via Multi-Agent Collaboration. In Proceedings of the Ninth Fact Extraction and VERification Workshop (FEVER), pages 114–126, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
VILLAIN at AVerImaTeC: Verifying Image–Text Claims via Multi-Agent Collaboration (Jung et al., FEVER 2026)
PDF:
https://aclanthology.org/2026.fever-1.9.pdf