Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?

Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George Constantinides, Yiren Zhao


Abstract
The inference of large language models (LLMs) requires immense computation and memory resources. To curtail these costs, quantisation has emerged as a promising solution, but existing LLM quantisation mainly focuses on 8-bit precision. In this work, we explore the statistical and learning properties of the LLM layer and attribute the bottleneck of LLM quantisation to numerical scaling offsets. To address this, we adapt block quantisations for LLMs, a family of methods that share scaling factors across packed numbers. Block quantisations efficiently reduce the numerical scaling offsets solely from an arithmetic perspective, without additional treatments in the computational path. Our nearly-lossless quantised 6-bit LLMs achieve a 19× higher arithmetic density and memory density than the float32 baseline, surpassing the prior art 8-bit quantisation by 2.5× in arithmetic density and 1.2× in memory density, without requiring any data calibration or re-training. We also share our insights into sub-8-bit LLM quantisation, including the mismatch between activation and weight distributions, optimal fine-tuning strategies, and a lower quantisation granularity inherent in the statistical properties of LLMs. The latter two tricks enable nearly-lossless 4-bit LLMs on downstream tasks. Our code is open-sourced.
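To illustrate the core idea the abstract refers to (sharing one scaling factor across a block of packed numbers), the Python sketch below quantises a vector block by block. It is a minimal illustration, not the paper's implementation: the block size, the symmetric rounding scheme, and the per-block absmax scale are assumptions made here for clarity.

    import numpy as np

    def block_quantise(x, block_size=16, n_bits=6):
        """Quantise a 1-D array with one shared scale per block of `block_size` values."""
        pad = (-len(x)) % block_size
        x_padded = np.pad(x, (0, pad))                  # pad so the length divides evenly
        blocks = x_padded.reshape(-1, block_size)

        qmax = 2 ** (n_bits - 1) - 1                    # symmetric signed integer range
        scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
        scales[scales == 0] = 1.0                       # avoid division by zero on all-zero blocks

        q = np.clip(np.round(blocks / scales), -qmax - 1, qmax)   # integer codes
        x_hat = (q * scales).reshape(-1)[:len(x)]                  # dequantise, drop padding
        return q.astype(np.int8), scales, x_hat

    # Example: 6-bit block quantisation of a weight row containing an outlier
    w = np.random.randn(64).astype(np.float32)
    w[3] = 8.0                                          # outlier inflates only its own block's scale
    q, scales, w_hat = block_quantise(w, block_size=16, n_bits=6)
    print("max abs error:", np.abs(w - w_hat).max())

Because each block carries its own scale, a single outlier coarsens the quantisation of only its own block rather than the whole tensor, which is the arithmetic mechanism by which block quantisation mitigates the scaling offsets described above.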
Anthology ID:
2023.emnlp-main.617
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9988–10006
URL:
https://aclanthology.org/2023.emnlp-main.617
DOI:
10.18653/v1/2023.emnlp-main.617
Cite (ACL):
Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George Constantinides, and Yiren Zhao. 2023. Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9988–10006, Singapore. Association for Computational Linguistics.
Cite (Informal):
Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference? (Zhang et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.617.pdf
Video:
https://aclanthology.org/2023.emnlp-main.617.mp4