OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
Tianyu Zheng | Ge Zhang | Tianhao Shen | Xueling Liu | Bill Yuchen Lin | Jie Fu | Wenhu Chen | Xiang Yue
Findings of the Association for Computational Linguistics: ACL 2024
The introduction of large language models has significantly advanced code generation. However, open-source models often lack the execution capabilities and iterative refinement of advanced systems like the GPT-4 Code Interpreter. To address this, we introduce OpenCodeInterpreter, a family of open-source code systems designed for generating, executing, and iteratively refining code. Supported by Code-Feedback, a dataset featuring 68K multi-turn interactions, OpenCodeInterpreter integrates execution and human feedback for dynamic code refinement. Our comprehensive evaluation of OpenCodeInterpreter across key benchmarks such as HumanEval, MBPP, and their enhanced versions from EvalPlus reveals its exceptional performance. Notably, OpenCodeInterpreter-33B achieves an accuracy of 83.2 (76.4) on the average (and plus versions) of HumanEval and MBPP, closely rivaling GPT-4's 84.2 (76.2), and further elevates to 91.6 (84.6) with synthesized human feedback from GPT-4. OpenCodeInterpreter bridges the gap between open-source code generation models and proprietary systems like the GPT-4 Code Interpreter.
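The abstract describes a generate-execute-refine loop in which execution diagnostics are fed back to the model for another attempt. The sketch below illustrates that general pattern, not the paper's actual implementation; generate_code is a hypothetical stand-in for a call to a code-generation model, and the helper names and turn limit are illustrative assumptions.

```python
# Minimal sketch of an execution-feedback refinement loop in the spirit of
# OpenCodeInterpreter. Not the paper's code: generate_code is a placeholder
# for a real model call; only the generate -> execute -> refine loop is shown.
import subprocess
import sys
import tempfile
from typing import Optional, Tuple


def generate_code(prompt: str, feedback: Optional[str] = None) -> str:
    """Hypothetical model call: return candidate code for `prompt`,
    optionally conditioned on execution `feedback` from the last attempt."""
    raise NotImplementedError("plug in a code-generation model here")


def run_candidate(code: str, timeout: int = 10) -> Tuple[bool, str]:
    """Execute candidate code in a subprocess; return (passed, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return proc.returncode == 0, proc.stderr


def refine(prompt: str, max_turns: int = 3) -> str:
    """Regenerate code until it executes cleanly or the turn budget runs out."""
    feedback = None
    code = ""
    for _ in range(max_turns):
        code = generate_code(prompt, feedback)
        ok, stderr = run_candidate(code)
        if ok:
            return code
        feedback = stderr  # execution diagnostics become next turn's feedback
    return code
```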