Execution-based Evaluation for Data Science Code Generation Models
Junjie Huang | Chenglong Wang | Jipeng Zhang | Cong Yan | Haotian Cui | Jeevana Priya Inala | Colin Clement | Nan Duan
Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances), 2022
Code generation models can benefit data scientists’ productivity by automatically generating code from context and text descriptions. An important measure of modeling progress is whether a model can generate code that executes correctly to solve the task. However, due to the lack of an evaluation dataset that directly supports execution-based model evaluation, existing work relies on code surface-form similarity metrics (e.g., BLEU, CodeBLEU) for model selection, which can be inaccurate. To remedy this, we introduce ExeDS, an evaluation dataset for execution-based evaluation of data science code generation tasks. ExeDS contains a set of 534 problems from Jupyter Notebooks, each consisting of code context, task description, reference program, and the desired execution output. With ExeDS, we evaluate the execution performance of five state-of-the-art code generation models that have achieved high surface-form evaluation scores. Our experiments show that models with high surface-form scores do not necessarily perform well on execution metrics, and execution-based metrics can better capture model code generation errors. All the code and data will be released upon acceptance.
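Below is a minimal sketch of the execution-based evaluation idea described in the abstract: run the code context together with the model-generated completion and compare the captured output against the desired execution output. The field names and the `execution_match` helper are illustrative assumptions, not the actual ExeDS schema or released evaluation code.

```python
# Hypothetical sketch: execution-based comparison of generated code output
# against a desired output string (not the official ExeDS implementation).
import contextlib
import io


def execution_match(code_context: str, generated_code: str, desired_output: str) -> bool:
    """Execute the context plus the generated completion and compare
    the captured stdout with the desired output (whitespace-normalized)."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            # Context and completion share one namespace, as in a notebook cell sequence.
            exec(code_context + "\n" + generated_code, {})
    except Exception:
        # Code that raises cannot reproduce the reference output.
        return False
    return buffer.getvalue().strip() == desired_output.strip()


# Example: the surface form differs from a reference like `print(sum([1, 2, 3]))`,
# yet the execution output matches, which surface-form metrics would penalize.
print(execution_match("xs = [1, 2, 3]", "print(sum(xs))", "6"))  # True
```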