EasyJudge: an Easy-to-use Tool for Comprehensive Response Evaluation of LLMs

Yijie Li, Yuan Sun


Abstract
Recently, there has been a growing trend of employing large language models (LLMs) to judge the quality of other LLMs' responses. Many studies have adopted closed-source models, mainly using GPT-4 as the evaluator. However, because GPT-4 is closed-source, using it as an evaluator raises concerns about transparency, controllability, and cost-effectiveness. Some researchers have therefore turned to fine-tuned open-source LLMs as evaluators. However, existing open-source evaluation LLMs generally lack a user-friendly visualization tool and have not been optimized for accelerated inference, which is inconvenient for researchers with limited resources and those working across different fields. This paper presents EasyJudge, a model developed to evaluate large language model responses. It is lightweight, precise, efficient, and user-friendly, featuring an intuitive visualization interface for easy deployment and use. EasyJudge uses detailed datasets and refined prompts for model optimization, achieving strong consistency with human and proprietary-model evaluations. Quantization enables EasyJudge to run efficiently on consumer-grade GPUs or even CPUs.
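
To illustrate the kind of workflow the abstract describes, the sketch below loads a quantized open-source judge model and asks it to score a single response on consumer hardware. This is a minimal, hypothetical example, not the authors' code: the checkpoint name, prompt template, and rating scale are assumptions rather than details from the paper.

```python
# Illustrative sketch: running a hypothetical quantized judge model locally.
# The model name, prompt format, and 1-10 scale are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "your-org/open-judge-model"  # hypothetical evaluator checkpoint

# 4-bit quantization keeps memory usage low enough for a consumer-grade GPU.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",
)

# A simple pointwise-judging prompt: rate one response to one instruction.
prompt = (
    "You are an impartial judge. Rate the response to the instruction on a "
    "scale of 1-10 and briefly justify the score.\n\n"
    "Instruction: Summarize the main causes of the 2008 financial crisis.\n"
    "Response: The crisis stemmed from subprime mortgage lending, excessive "
    "leverage, and the collapse of mortgage-backed securities.\n\n"
    "Judgment:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Print only the newly generated judgment, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```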
Anthology ID:
2025.coling-demos.10
Volume:
Proceedings of the 31st International Conference on Computational Linguistics: System Demonstrations
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Brodie Mather, Mark Dras
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
91–103
URL:
https://aclanthology.org/2025.coling-demos.10/
Cite (ACL):
Yijie Li and Yuan Sun. 2025. EasyJudge: an Easy-to-use Tool for Comprehensive Response Evaluation of LLMs. In Proceedings of the 31st International Conference on Computational Linguistics: System Demonstrations, pages 91–103, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
EasyJudge: an Easy-to-use Tool for Comprehensive Response Evaluation of LLMs (Li & Sun, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-demos.10.pdf