Weidong Wen
2024
CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification
Junhui He | Shangyu Wu | Weidong Wen | Chun Jason Xue | Qingan Li
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Deploying large language models (LLMs) on edge devices presents significant challenges due to the substantial computational overhead and memory requirements. Activation sparsification can mitigate these resource challenges by reducing the number of activated neurons during inference. Existing methods typically employ thresholding-based sparsification based on the statistics of activation tensors. However, they do not model the impact of activation sparsification on performance, resulting in greater performance degradation than necessary. To address these limitations, this paper reformulates the activation sparsification problem to explicitly capture the relationship between activation sparsity and model performance. Then, this paper proposes CHESS, a general activation sparsification approach via CHannel-wise thrEsholding and Selective Sparsification. First, channel-wise thresholding assigns a unique threshold to each activation channel in the feed-forward network (FFN) layers. Then, selective sparsification applies thresholding-based activation sparsification to specific layers within the attention modules. Finally, we detail the implementation of sparse kernels to accelerate LLM inference. Experimental results demonstrate that the proposed CHESS achieves lower performance degradation across eight downstream tasks while activating fewer parameters than existing methods, thus speeding up LLM inference by up to 1.27x.
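To illustrate the channel-wise thresholding idea described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes per-channel thresholds are calibrated from sample activation statistics (a quantile-based calibration is an illustrative assumption), and the function names and the parameter `q` are hypothetical.

```python
# Minimal sketch of channel-wise activation thresholding (assumptions noted above).
import torch

def calibrate_channel_thresholds(sample_acts: torch.Tensor, q: float = 0.5) -> torch.Tensor:
    # sample_acts: [num_tokens, hidden_dim] activations collected on calibration data.
    # One threshold per channel: the q-quantile of |activation| within that channel.
    return sample_acts.abs().quantile(q, dim=0)  # shape: [hidden_dim]

def channelwise_sparsify(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    # Zero out entries whose magnitude falls below their channel's own threshold,
    # so a sparse kernel can skip the corresponding weight rows/columns downstream.
    mask = x.abs() >= tau  # tau broadcasts over the last (channel) dimension
    return x * mask

# Usage: calibrate once offline, then apply inside the FFN at inference time.
calib_acts = torch.randn(1024, 4096)                 # toy calibration activations
tau = calibrate_channel_thresholds(calib_acts)       # per-channel thresholds
sparse = channelwise_sparsify(torch.randn(8, 4096), tau)
print((sparse == 0).float().mean())                  # achieved sparsity ratio
```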
2009
A Novel Method of Sentence Ordering Based on Support Vector Machine
Gongfu Peng | Yanxiang He | Ye Tian | Yingsheng Tian | Weidong Wen
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2