The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate”

Juhyun Oh, Eunsu Kim, Inha Cha, Alice Oh

Abstract
This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. We assess the performance of three LLMs and one open-source LM on Question-Answering (QA) and evaluation tasks using the TriviaQA dataset (Joshi et al., 2017). Results indicate a significant disparity: the models perform worse on evaluation tasks than on generation tasks. Intriguingly, we discover instances of unfaithful evaluation, where models accurately evaluate answers in areas where they lack competence, underscoring the need to examine the faithfulness and trustworthiness of LLMs as evaluators. This study contributes to the understanding of “the Generative AI Paradox” (West et al., 2023), highlighting the need to explore the correlation between generative excellence and evaluation proficiency, and the necessity of scrutinizing faithfulness in model evaluations.
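
To make the setup described above concrete, the following is a minimal illustrative sketch, not the authors' released code, of how one might measure the generation-versus-evaluation gap on TriviaQA. Here `query_model`, the prompt wording, and the `candidate` field are hypothetical placeholders; scoring by matching against an alias list follows common TriviaQA practice.

```python
# Minimal sketch (not the paper's code) of the generation-vs-evaluation
# comparison described in the abstract. `query_model` is a hypothetical
# stand-in for the LLM API under test; prompts are illustrative only.

def query_model(prompt: str) -> str:
    """Hypothetical call to the LLM under test; replace with a real API client."""
    raise NotImplementedError

def exact_match(prediction: str, aliases: list[str]) -> bool:
    """TriviaQA-style scoring: a prediction is correct if it matches any gold alias."""
    norm = prediction.strip().lower()
    return any(norm == a.strip().lower() for a in aliases)

def generation_accuracy(examples: list[dict]) -> float:
    """Generation task: the model answers each question directly."""
    correct = 0
    for ex in examples:
        answer = query_model(f"Answer the question concisely.\nQ: {ex['question']}\nA:")
        correct += exact_match(answer, ex["aliases"])
    return correct / len(examples)

def evaluation_accuracy(examples: list[dict]) -> float:
    """Evaluation task: the model judges a candidate answer, and its verdict
    is compared against the gold alias-match label."""
    correct = 0
    for ex in examples:
        verdict = query_model(
            "Is the candidate answer to this question correct? Reply Yes or No.\n"
            f"Q: {ex['question']}\nCandidate: {ex['candidate']}"
        )
        gold = exact_match(ex["candidate"], ex["aliases"])
        predicted = verdict.strip().lower().startswith("yes")
        correct += (predicted == gold)
    return correct / len(examples)
```

Under this sketch, a model exhibiting the paradox the paper reports would score markedly higher on `generation_accuracy` than on `evaluation_accuracy` over the same questions.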
Anthology ID: 2024.eacl-srw.19
Volume: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Neele Falk, Sara Papi, Mike Zhang
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 248–257
URL: https://aclanthology.org/2024.eacl-srw.19
Cite (ACL): Juhyun Oh, Eunsu Kim, Inha Cha, and Alice Oh. 2024. The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate”. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 248–257, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate” (Oh et al., EACL 2024)
PDF: https://aclanthology.org/2024.eacl-srw.19.pdf