Jianwen Luo
2024
DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models
Yiming Huang, Jianwen Luo, Yan Yu, Yitong Zhang, Fangyu Lei, Yifan Wei, Shizhu He, Lifu Huang, Xiao Liu, Jun Zhao, Kang Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
We introduce DA-Code, a code generation benchmark specifically designed to assess LLMs on agent-based data science tasks. The benchmark has three core elements. First, the tasks in DA-Code are inherently challenging, setting them apart from traditional code generation tasks and demanding advanced skills in grounding and planning. Second, all examples are based on real and diverse data, covering a wide range of complex data wrangling and analytics tasks. Third, to solve the tasks, models must use data science programming languages, including Python and SQL, to perform intricate data processing and derive the answers. We set up the benchmark in a controllable, executable, and scalable environment that aligns with real-world data analysis scenarios. Annotators meticulously designed the evaluation suite to ensure the accuracy and robustness of evaluation. We also develop a baseline, DA-Agent. Experiments show that although this baseline outperforms other existing frameworks, it achieves only 30.5% accuracy even with the current best LLMs, leaving ample room for improvement. We release our benchmark at https://github.com/yiyihum/dabench.