GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question Answering

Sacha Muller, Antonio Loison, Bilel Omrani, Gautier Viaud


Abstract
Retrieval-Augmented Generation (RAG) has emerged as a common paradigm to use Large Language Models (LLMs) alongside private and up-to-date knowledge bases. In this work, we address the challenges of using LLM-as-a-Judge when evaluating grounded answers generated by RAG systems. To assess the calibration and discrimination capabilities of judge models, we identify 7 generator failure modes and introduce GroUSE (Grounded QA Unitary Scoring of Evaluators), a meta-evaluation benchmark of 144 unit tests. This benchmark reveals that existing automated RAG evaluation frameworks often overlook important failure modes, even when using GPT-4 as a judge. To improve on the current design of automated RAG evaluation frameworks, we propose a novel pipeline and find that while closed models perform well on GroUSE, state-of-the-art open-source judges do not generalize to our proposed criteria, despite strong correlation with GPT-4’s judgement. Our findings suggest that correlation with GPT-4 is an incomplete proxy for the practical performance of judge models and should be supplemented with evaluations on unit tests for precise failure mode detection. We further show that finetuning Llama-3 on GPT-4’s reasoning traces significantly boosts its evaluation capabilities, improving upon both correlation with GPT-4’s evaluations and calibration on reference situations.
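
To make the unit-test idea concrete, here is a minimal Python sketch of how a GroUSE-style meta-evaluation test could be structured: each test pairs a question, retrieved references, and an answer that exhibits a known failure mode with the scores a well-calibrated judge should assign, and a judge passes only if it reproduces those reference scores. All names here (UnitTest, run_unit_tests, stub_judge, the "faithfulness" criterion and 1-5 scale) are illustrative assumptions, not the paper's actual API or scoring rubric.

```python
# Hypothetical sketch of a GroUSE-style meta-evaluation unit test.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class UnitTest:
    question: str
    references: List[str]    # retrieved context passages
    answer: str              # generated answer exhibiting a known failure mode
    expected: Dict[str, int]  # reference scores the judge should reproduce


def run_unit_tests(
    judge_fn: Callable[[str, List[str], str], Dict[str, int]],
    tests: List[UnitTest],
) -> float:
    """Return the fraction of unit tests where the judge matches every expected score."""
    passed = 0
    for t in tests:
        verdict = judge_fn(t.question, t.references, t.answer)
        if all(verdict.get(criterion) == score for criterion, score in t.expected.items()):
            passed += 1
    return passed / len(tests)


# Example case: an answer adding information absent from the references
# should receive a low faithfulness score (assumed 1-5 scale).
tests = [
    UnitTest(
        question="When was the Eiffel Tower completed?",
        references=["The Eiffel Tower was completed in 1889."],
        answer="It was completed in 1889 and is 450 m tall.",  # the height is unsupported
        expected={"faithfulness": 1},
    ),
]


def stub_judge(question: str, references: List[str], answer: str) -> Dict[str, int]:
    """Trivial stand-in for an LLM judge call, for illustration only."""
    supported = answer in " ".join(references)
    return {"faithfulness": 5 if supported else 1}


print(f"Pass rate: {run_unit_tests(stub_judge, tests):.0%}")
```

In this framing, calibration and discrimination are measured by pass rate over the full set of tests rather than by correlation with another judge's scores, which is the distinction the abstract draws.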
Anthology ID:
2025.coling-main.304
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
4510–4534
URL:
https://aclanthology.org/2025.coling-main.304/
Cite (ACL):
Sacha Muller, Antonio Loison, Bilel Omrani, and Gautier Viaud. 2025. GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question Answering. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4510–4534, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question Answering (Muller et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.304.pdf