A Frustratingly Easy Post-Training Quantization Scheme for LLMs

Yongkweon Jeon, Chungman Lee, Kyungphil Park, Ho-young Kim


Abstract
Efficient inference has become crucial for hyper-scale AI models, including large language models, as their parameter counts continue to grow for enhanced performance. This necessity holds regardless of the computing environment, whether mobile devices or cloud servers. Quantization emerges as a solution to alleviate the computational burden during inference. By representing models with a reduced bit-width, quantization minimizes the frequency of DRAM access while fully exploiting the parallelism of operations through a dense matrix format. Consequently, quantized models achieve low end-to-end latency and optimize resource utilization by addressing both memory and compute bottlenecks. In this paper, we propose a straightforward post-training quantization scheme, called Z-Fold, that fully exploits the structure of the Transformer architecture widely employed in large language models.
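To make the general idea concrete, below is a minimal sketch of weight-only post-training quantization (symmetric, per-output-channel, round-to-nearest) of the kind such schemes build on. It is not the paper's Z-Fold method; all function names and parameters are illustrative assumptions.

```python
# Minimal sketch of generic post-training weight quantization.
# NOT the Z-Fold scheme from the paper; names are illustrative only.
import numpy as np

def quantize_per_channel(W: np.ndarray, n_bits: int = 4):
    """Quantize each row (output channel) of W to signed integers.

    Returns integer codes and per-row step sizes so that
    W_hat ~= codes * scales.
    """
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for 4-bit
    scales = np.abs(W).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)       # avoid divide-by-zero
    codes = np.clip(np.round(W / scales), -qmax - 1, qmax).astype(np.int8)
    return codes, scales

def dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate float weight matrix from codes and scales."""
    return codes.astype(np.float32) * scales

# Example: 4-bit quantization of a random "weight" matrix.
W = np.random.randn(8, 16).astype(np.float32)
codes, scales = quantize_per_channel(W, n_bits=4)
W_hat = dequantize(codes, scales)
print("mean squared quantization error:", np.mean((W - W_hat) ** 2))
```

The integer codes are what gets stored and moved through DRAM; the small per-channel scales let the dense low-bit matrix be dequantized (or consumed directly by low-bit kernels) at inference time.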
Anthology ID: 2023.emnlp-main.892
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14446–14461
URL: https://aclanthology.org/2023.emnlp-main.892
DOI: 10.18653/v1/2023.emnlp-main.892
Cite (ACL): Yongkweon Jeon, Chungman Lee, Kyungphil Park, and Ho-young Kim. 2023. A Frustratingly Easy Post-Training Quantization Scheme for LLMs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14446–14461, Singapore. Association for Computational Linguistics.
Cite (Informal): A Frustratingly Easy Post-Training Quantization Scheme for LLMs (Jeon et al., EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.892.pdf
Video: https://aclanthology.org/2023.emnlp-main.892.mp4