Finding Blind Spots in Evaluator LLMs with Interpretable Checklists

Sumanth Doddapaneni, Mohammed Khan, Sshubam Verma, Mitesh Khapra


Abstract
Large Language Models (LLMs) are increasingly relied upon to evaluate text outputs of other LLMs, thereby influencing leaderboards and development decisions. However, concerns persist over the accuracy of these assessments and the potential for misleading conclusions. In this work, we investigate the effectiveness of LLMs as evaluators for text generation tasks. We propose FBI, a novel framework designed to examine the proficiency of Evaluator LLMs in assessing four critical abilities in other LLMs: factual accuracy, instruction following, coherence in long-form writing, and reasoning proficiency. By introducing targeted perturbations into LLM-generated answers that clearly impact one of these key capabilities, we test whether an Evaluator LLM can detect the resulting quality drops. Using a total of 2,400 perturbed answers covering 22 perturbation categories, we conduct a comprehensive study with different evaluation strategies on five prominent LLMs commonly used as evaluators in the literature. Our findings reveal significant shortcomings in current Evaluator LLMs, which failed to identify quality drops in over 50% of cases on average. Single-answer and pairwise evaluations showed notable limitations, whereas reference-based evaluations performed comparatively better. These results underscore the unreliable nature of current Evaluator LLMs and advocate for cautious use in practical applications.
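The perturb-and-detect protocol described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the prompt template, the 1-5 scoring scale, and the `judge` callable are assumptions introduced purely for illustration. The sketch applies a factual perturbation to a gold answer and checks whether a single-answer evaluator assigns the perturbed answer a lower score.

```python
from typing import Callable

# Hypothetical judge: any callable that takes an evaluation prompt and returns
# a quality score (here assumed to be on a 1-5 scale). In practice this would
# wrap an API call to an Evaluator LLM.
Judge = Callable[[str], float]

SINGLE_ANSWER_PROMPT = (
    "Rate the following answer to the question on a scale of 1-5 "
    "for overall quality.\n\nQuestion: {question}\nAnswer: {answer}\nScore:"
)


def detects_quality_drop(judge: Judge, question: str, gold: str, perturbed: str) -> bool:
    """Return True if the evaluator scores the perturbed answer below the gold answer."""
    gold_score = judge(SINGLE_ANSWER_PROMPT.format(question=question, answer=gold))
    pert_score = judge(SINGLE_ANSWER_PROMPT.format(question=question, answer=perturbed))
    return pert_score < gold_score


if __name__ == "__main__":
    question = "In which year did the Apollo 11 mission land on the Moon?"
    gold = "Apollo 11 landed on the Moon in 1969."
    perturbed = "Apollo 11 landed on the Moon in 1972."  # factual-accuracy perturbation

    # Mock judge standing in for a real Evaluator LLM (illustrative only):
    # it returns a constant score and therefore cannot see the error.
    mock_judge: Judge = lambda prompt: 3.0
    print(detects_quality_drop(mock_judge, question, gold, perturbed))  # False -> blind spot
```

Aggregating this kind of check over many perturbed answers and perturbation categories is what yields per-category detection rates of the sort the paper reports.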
Anthology ID: 2024.emnlp-main.911
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 16279–16309
URL: https://aclanthology.org/2024.emnlp-main.911
Cite (ACL): Sumanth Doddapaneni, Mohammed Khan, Sshubam Verma, and Mitesh Khapra. 2024. Finding Blind Spots in Evaluator LLMs with Interpretable Checklists. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16279–16309, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Finding Blind Spots in Evaluator LLMs with Interpretable Checklists (Doddapaneni et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.911.pdf
Data: 2024.emnlp-main.911.data.zip