Few-shot Temporal Pruning Accelerates Diffusion Models for Text Generation
Bocheng Li | Zhujin Gao | Yongxin Zhu | Kun Yin | Haoyu Cao | Deqiang Jiang | Linli Xu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Diffusion models have achieved significant success in computer vision and shown immense potential in natural language processing, particularly for text generation tasks. However, generating high-quality text with these models often requires thousands of iterations, leading to slow sampling rates. Existing acceleration methods either neglect how sampling steps are distributed, which compromises performance at small numbers of iterations, or require additional training, which introduces considerable computational overhead. In this paper, we present Few-shot Temporal Pruning, a novel technique that accelerates diffusion models for text generation without supplementary training while effectively leveraging limited data. Using a Bayesian optimization approach, our method eliminates redundant sampling steps from the sampling process, thereby increasing generation speed. A comprehensive evaluation of discrete and continuous diffusion models across various tasks, including machine translation, question generation, and paraphrasing, shows that our approach achieves competitive performance even with minimal sampling steps after less than one minute of optimization, yielding an acceleration of up to 400x in text generation tasks.
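The abstract describes searching for a small subset of sampling steps with Bayesian optimization on a few-shot development set. The sketch below illustrates one way such a search could be set up using scikit-optimize's gp_minimize over K timestep "knots"; it is not the paper's implementation, and the sampler (diffusion_sample), scorer (evaluate_quality), and few-shot loader (load_few_shot_examples) are hypothetical placeholders.

```python
# Illustrative sketch: pruning a diffusion sampling schedule via Bayesian
# optimization (scikit-optimize). Assumes a text diffusion sampler that
# accepts an explicit list of timesteps.
from skopt import gp_minimize
from skopt.space import Real

T = 2000   # assumed length of the full sampling schedule
K = 5      # number of pruned steps to keep

dev_examples = load_few_shot_examples()  # hypothetical: a handful of dev pairs


def schedule_from_knots(knots):
    """Map K values in [0, 1] to a sorted, de-duplicated list of timesteps."""
    return sorted({int(round(k * (T - 1))) for k in knots}, reverse=True)


def objective(knots):
    """Negative generation quality under the pruned schedule (to minimize)."""
    steps = schedule_from_knots(knots)
    outputs = [diffusion_sample(x, timesteps=steps) for x in dev_examples]  # hypothetical sampler
    return -evaluate_quality(outputs, dev_examples)  # e.g. negative BLEU


result = gp_minimize(
    objective,
    dimensions=[Real(0.0, 1.0) for _ in range(K)],
    n_calls=30,        # a small budget keeps the search fast
    random_state=0,
)
best_schedule = schedule_from_knots(result.x)
print("pruned sampling steps:", best_schedule)
```

Because the objective is evaluated on only a few examples, each trial is cheap, which is consistent with the abstract's claim that the optimization can finish in under a minute.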