ICE-Score: Instructing Large Language Models to Evaluate Code

Terry Yue Zhuo


Abstract
Recent advancements in natural language generation have facilitated the use of large language models (LLMs) to assess the quality of generated text. Although these models have shown promising results in tasks such as machine translation and summarization, their applicability to code intelligence tasks remains limited without human involvement. The complexity of the programming concepts involved makes it difficult to develop evaluation metrics that align with human judgment. Token-matching metrics such as BLEU have demonstrated weak correlations with human practitioners on code intelligence tasks, and relying on human-written test suites to evaluate functional correctness is challenging in low-resource domains. To overcome these obstacles, we propose ICE-Score, a new evaluation metric that instructs LLMs to assess code. Our metric addresses the limitations of existing approaches by achieving superior correlations with functional correctness and human preferences, without the need for test oracles or references. We evaluate the efficacy of our metric on two aspects (human preference and execution success) and four programming languages. Our results demonstrate that our metric surpasses state-of-the-art metrics for code generation, delivering high levels of accuracy and consistency across programming languages and tasks. We also make our evaluation metric and datasets publicly available to encourage further research in evaluating code intelligence tasks.
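To make the idea concrete, below is a minimal sketch of reference-free, LLM-instructed code scoring in the spirit of ICE-Score. It assumes access to an OpenAI-style chat completions API; the prompt wording, 0-4 scale, model name, and function name are illustrative assumptions, not the exact prompt or setup used in the paper.

```python
# Hypothetical sketch: instruct an LLM to rate generated code without tests or references.
# The prompt, scale, and model choice are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm_code_score(problem: str, code: str, aspect: str = "functional correctness") -> int:
    """Ask an instruction-following LLM to rate candidate code on a 0-4 integer scale."""
    prompt = (
        "You will be given a programming problem and a candidate solution.\n"
        f"Rate the {aspect} of the solution on an integer scale from 0 (worst) to 4 (best).\n\n"
        f"Problem:\n{problem}\n\n"
        f"Candidate solution:\n{code}\n\n"
        "Respond with the integer score only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # any capable instruction-following model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring
    )
    return int(response.choices[0].message.content.strip())
```

Because the score comes directly from the model's judgment of the problem-solution pair, no test oracle or reference implementation is needed, which is the property the paper evaluates against functional correctness and human preference.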
Anthology ID:
2024.findings-eacl.148
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2232–2242
URL:
https://aclanthology.org/2024.findings-eacl.148
Cite (ACL):
Terry Yue Zhuo. 2024. ICE-Score: Instructing Large Language Models to Evaluate Code. In Findings of the Association for Computational Linguistics: EACL 2024, pages 2232–2242, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
ICE-Score: Instructing Large Language Models to Evaluate Code (Zhuo, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.148.pdf