2025
Case2Code: Scalable Synthetic Data for Code Generation
Yunfan Shao | Linyang Li | Yichuan Ma | Peiji Li | Demin Song | Qinyuan Cheng | Shimin Li | Xiaonan Li | Pengyu Wang | Qipeng Guo | Hang Yan | Xipeng Qiu | Xuanjing Huang | Dahua Lin
Proceedings of the 31st International Conference on Computational Linguistics
Large Language Models (LLMs) have shown outstanding breakthroughs in code generation. Recent work improves code LLMs by training on synthetic data generated by powerful teacher LLMs, which can be challenging to scale due to the dependence on a teacher model and high generation costs. In this paper, we focus on synthesizing code data at scale and propose a Case2Code task by exploiting the expressiveness and correctness of programs. Case2Code is an inductive inference task that aims to infer underlying code implementations by observing input-output examples or program behaviors. By incorporating LLMs to generate program inputs, and executing the program with these inputs to obtain the program outputs, we can synthesize diverse and high-quality Case2Code data at scale for training and evaluating code LLMs. Experimental results show that case-to-code induction is challenging for current representative LLMs if they are untrained. Models trained with Case2Code improve performance not only on in-distribution case-to-code induction but also on various code generation tasks, demonstrating the great potential of large-scale synthetic data and inductive learning.
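The abstract describes a synthesis loop: start from a seed program, ask an LLM to propose candidate inputs, execute the program on those inputs, and keep the resulting input-output pairs as the "cases" a model must later invert back into code. The sketch below illustrates that loop under stated assumptions; the function names (`synthesize_case2code_example`, `propose_inputs`) and the output schema are hypothetical and are not taken from the paper's released code, and the LLM-backed input generator is left as a stub.

```python
from typing import Any, Callable


def propose_inputs(program_source: str, n_cases: int) -> list[tuple]:
    """Stand-in for the LLM that proposes diverse candidate inputs for the
    given program; the actual pipeline would query a language model here."""
    raise NotImplementedError("plug in an LLM-backed input generator")


def synthesize_case2code_example(program_source: str,
                                 entry_point: str,
                                 n_cases: int = 5) -> dict:
    """Build one hypothetical Case2Code training instance from a seed program."""
    namespace: dict[str, Any] = {}
    exec(program_source, namespace)  # seed programs are self-generated, not untrusted user input
    func: Callable = namespace[entry_point]

    cases = []
    for args in propose_inputs(program_source, n_cases):
        try:
            cases.append({"input": args, "output": func(*args)})
        except Exception:
            continue  # discard inputs the program cannot handle

    # A model is later trained to induce `target_code` from `cases` alone.
    return {"cases": cases, "target_code": program_source}
```

Because the "labels" come from actually executing the program, the synthesized pairs are correct by construction, which is what lets the data scale without relying on a stronger teacher model to write the target code.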