Yanxu Chen
2024
How Do Your Code LLMs Perform? Empowering Code Instruction Tuning with Really Good Data
Yejie Wang | Keqing He | Dayuan Fu | Zhuoma GongQue | Heyang Xu | Yanxu Chen | Zhexu Wang | Yujia Fu | Guanting Dong | Muxi Diao | Jingang Wang | Mengdi Zhang | Xunliang Cai | Weiran Xu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recently, there has been a growing interest in studying how to construct better code instruction tuning data. However, we observe that code models trained with these datasets exhibit high performance on HumanEval but perform worse on other benchmarks such as LiveCodeBench. Upon further investigation, we find that many datasets suffer from severe data leakage. After cleaning up most of the leaked data, some well-known high-quality datasets perform poorly. This discovery reveals a new challenge: identifying which datasets genuinely qualify as high-quality code instruction data. To address this, we propose an efficient code data pruning strategy for selecting good samples. Our approach is based on three dimensions: instruction complexity, response quality, and instruction diversity. Based on our selected data, we present XCoder, a family of models finetuned from LLaMA3. Our experiments show that XCoder achieves new state-of-the-art performance using less training data, which verifies the effectiveness of our data strategy. Moreover, we perform a comprehensive analysis of the data composition and find that existing code datasets have different characteristics according to their construction methods, which provides new insights for future code LLMs.
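The abstract describes pruning along three dimensions: instruction complexity, response quality, and instruction diversity. The sketch below is a minimal, hypothetical illustration of that selection idea, not the XCoder pipeline itself; the `Sample` type, the scoring heuristics, and the thresholds are all placeholders chosen for readability.

```python
# Illustrative three-dimension pruning of code instruction data.
# The heuristics are placeholders, not the paper's actual scorers.
from dataclasses import dataclass


@dataclass
class Sample:
    instruction: str
    response: str


def complexity_score(sample: Sample) -> float:
    # Placeholder: approximate instruction complexity via length and constraint words.
    constraint_words = ("must", "should", "ensure", "constraint", "edge case")
    hits = sum(w in sample.instruction.lower() for w in constraint_words)
    return len(sample.instruction.split()) / 100.0 + hits


def quality_score(sample: Sample) -> float:
    # Placeholder: prefer responses that actually contain code and are not trivial.
    has_code = "def " in sample.response or "class " in sample.response
    length_bonus = min(len(sample.response.split()) / 200.0, 1.0)
    return (1.0 if has_code else 0.0) + length_bonus


def diversity_filter(samples, keep_ratio=0.5):
    # Placeholder diversity pass: greedily keep samples whose instructions share
    # few tokens with those already kept (a crude stand-in for embedding-based
    # near-duplicate removal).
    budget = max(1, int(len(samples) * keep_ratio))
    kept, seen_tokens = [], set()
    for s in samples:
        if len(kept) >= budget:
            break
        tokens = set(s.instruction.lower().split())
        overlap = len(tokens & seen_tokens) / max(len(tokens), 1)
        if overlap < 0.8:
            kept.append(s)
            seen_tokens |= tokens
    return kept


def prune(samples, keep_ratio=0.5):
    # Rank by combined complexity + quality, then apply the diversity pass.
    ranked = sorted(samples,
                    key=lambda s: complexity_score(s) + quality_score(s),
                    reverse=True)
    return diversity_filter(ranked, keep_ratio=keep_ratio)


if __name__ == "__main__":
    pool = [
        Sample("Write a function that reverses a string.",
               "def rev(s):\n    return s[::-1]"),
        Sample("Implement an LRU cache; it must handle capacity 0 as an edge case.",
               "from collections import OrderedDict\n\nclass LRUCache: ..."),
    ]
    for s in prune(pool, keep_ratio=1.0):
        print(s.instruction)
```

In a real pipeline the placeholder scorers would be replaced by learned or model-based judges, but the overall shape, score each sample on the three dimensions and keep a diverse top subset, stays the same.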
2022
BMCook: A Task-agnostic Compression Toolkit for Big Models
Zhengyan Zhang | Baitao Gong | Yingfa Chen | Xu Han | Guoyang Zeng | Weilin Zhao | Yanxu Chen | Zhiyuan Liu | Maosong Sun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Recently, pre-trained language models (PLMs) have achieved great success on various NLP tasks and have shown a trend of exponential growth in model size. To alleviate the unaffordable computational costs brought by this growth, model compression has been widely explored. Existing efforts have achieved promising results in compressing medium-sized models for specific tasks, while task-agnostic compression for big models with billions of parameters is rarely studied. Task-agnostic compression can provide an efficient and versatile big model for both prompting and delta tuning, leading to a more general impact than task-specific compression. Hence, we introduce BMCook, a task-agnostic compression toolkit for big models. In BMCook, we implement four representative compression methods, including quantization, pruning, distillation, and MoEfication. Developers can easily combine these methods to achieve better efficiency. To evaluate BMCook, we apply it to compress T5-3B (a PLM with 3 billion parameters). We achieve a nearly 12x efficiency improvement while maintaining over 97% of the original T5-3B performance on three typical NLP benchmarks. Moreover, the final compressed model also significantly outperforms T5-base (a PLM with 220 million parameters), which has a similar computational cost. BMCook is publicly available at https://github.com/OpenBMB/BMCook.
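As a rough illustration of chaining compression steps the way the abstract describes, the snippet below applies magnitude pruning followed by symmetric int8 weight quantization to a single PyTorch linear layer. It is a conceptual sketch only and does not use or mimic the actual BMCook API; see the repository above for the real toolkit.

```python
# Conceptual sketch: combine two compression steps (magnitude pruning, then
# int8 weight quantization) on one linear layer. Not the BMCook API.
import torch
import torch.nn as nn


def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in place."""
    w = linear.weight.data
    k = int(w.numel() * sparsity)
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values
    w.mul_((w.abs() > threshold).float())


def quantize_int8(linear: nn.Linear):
    """Symmetric per-tensor int8 quantization; returns (int8 weights, scale)."""
    w = linear.weight.data
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale


if __name__ == "__main__":
    layer = nn.Linear(512, 512)
    magnitude_prune_(layer, sparsity=0.5)    # step 1: pruning
    q_weight, scale = quantize_int8(layer)   # step 2: quantization
    dequant = q_weight.float() * scale       # approximate reconstruction
    err = (dequant - layer.weight.data).abs().mean().item()
    print(f"mean |dequantized - pruned| weight error: {err:.6f}")
```

The point of the sketch is the composition: each step consumes the output of the previous one, which is the workflow the toolkit makes convenient at the scale of billion-parameter models.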