CoVeGAT: A Hybrid LLM & Graph‐Attention Pipeline for Accurate Citation‐Aligned Claim Verification

Max Bader, Akshatha Arunkumar, Ohan Ahmad, Maruf Hassen, Charles Duong, Vasu Sharma, Sean O’Brien, Kevin Zhu


Abstract
Modern LLMs often generate fluent text yet fabricate, misquote, or misattribute evidence. To quantify this flaw, we built a balanced Citation‐Alignment Dataset of 500 genuine, expert‐verified claim–quote pairs and 500 minimally perturbed false variants from news, legal, scientific, and literary sources. We then propose CoVeGAT, which converts claims and citations into SVO triplets (with trigram fallback), scores each pair via an LLM‐driven chain of verification, and embeds them in a weighted semantic graph. A Graph Attention Network over BERT embeddings issues strict pass/fail judgments on alignment. Zero‐shot evaluation of seven top LLMs (e.g., GPT‐4o, Gemini 1.5, Mistral 7B) reveals a trade‐off: decisive models reach 82.5% accuracy but err confidently, while cautious ones fall below 50%. A MiniLM + RBF kernel baseline, by contrast, achieves 96.4% accuracy, underscoring the power of simple, interpretable methods.
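The abstract does not spell out how the MiniLM + RBF kernel baseline makes its pass/fail decision. One plausible reading is that each claim and quote are embedded (e.g., with a MiniLM sentence encoder) and alignment is declared when the RBF-kernel similarity of the two embeddings clears a threshold. The sketch below follows that assumption; the `gamma` and `threshold` values are illustrative defaults, not figures from the paper, and `embed` stands in for whatever encoder the authors used.

```python
import numpy as np

def rbf_similarity(u, v, gamma: float = 1.0) -> float:
    """RBF kernel k(u, v) = exp(-gamma * ||u - v||^2) between two vectors."""
    diff = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

def verify_alignment(claim_emb, quote_emb,
                     gamma: float = 1.0, threshold: float = 0.5) -> bool:
    """Strict pass/fail: the pair is 'aligned' iff kernel similarity
    between claim and quote embeddings exceeds the threshold."""
    return rbf_similarity(claim_emb, quote_emb, gamma) >= threshold
```

Since the RBF kernel maps identical embeddings to exactly 1.0 and decays smoothly with squared distance, the single `threshold` hyperparameter can be tuned on held-out pairs, which keeps the baseline both simple and interpretable, as the abstract emphasizes.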
Anthology ID:
2025.r2lm-1.1
Volume:
Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Alicia Picazo-Izquierdo, Ernesto Luis Estevanell-Valladares, Ruslan Mitkov, Rafael Muñoz Guillena, Raúl García Cerdá
Venues:
R2LM | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1–9
URL:
https://aclanthology.org/2025.r2lm-1.1/
Cite (ACL):
Max Bader, Akshatha Arunkumar, Ohan Ahmad, Maruf Hassen, Charles Duong, Vasu Sharma, Sean O’Brien, and Kevin Zhu. 2025. CoVeGAT: A Hybrid LLM & Graph‐Attention Pipeline for Accurate Citation‐Aligned Claim Verification. In Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models, pages 1–9, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
CoVeGAT: A Hybrid LLM & Graph‐Attention Pipeline for Accurate Citation‐Aligned Claim Verification (Bader et al., R2LM 2025)
PDF:
https://aclanthology.org/2025.r2lm-1.1.pdf