Radostin Cholakov


2024

Fast Matrix Multiplications for Lookup Table-Quantized LLMs
Han Guo | William Brandon | Radostin Cholakov | Jonathan Ragan-Kelley | Eric P. Xing | Yoon Kim
Findings of the Association for Computational Linguistics: EMNLP 2024

The deployment of large language models (LLMs) is often constrained by memory bandwidth, where the primary bottleneck is the cost of transferring model parameters from the GPU’s global memory to its registers. When coupled with custom kernels that fuse the dequantization and matmul operations, weight-only quantization can thus enable faster inference by reducing the amount of memory movement. However, developing high-performance kernels for weight-quantized LLMs presents substantial challenges, especially when the weights are compressed to non-evenly-divisible bit widths (e.g., 3 bits) with non-uniform, lookup table (LUT) quantization. This paper describes FLUTE, a flexible lookup table engine for LUT-quantized LLMs, which uses offline restructuring of the quantized weight matrix to minimize bit manipulations associated with unpacking, and vectorization and duplication of the lookup table to mitigate shared memory bandwidth constraints. At batch sizes < 32 and a quantization group size of 128 (typical in LLM inference), the FLUTE kernel can be 2-4x faster than existing GEMM kernels. As an application of FLUTE, we explore a simple extension to lookup table-based NormalFloat quantization and apply it to quantize LLaMA3 to various configurations, achieving competitive quantization performance against strong baselines while obtaining an end-to-end throughput increase of 1.5 to 2 times.
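To make the abstract's core operation concrete, here is a minimal NumPy sketch of lookup-table dequantization as generally used in LUT-quantized weights, not the FLUTE CUDA kernel itself; the 3-bit/8-entry table, the group size of 128, and the function name dequantize_lut are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# Illustrative sketch: each weight is stored as a small integer index into a
# lookup table of values (here 3-bit indices into an 8-entry table), and a
# per-group scale restores magnitude within each quantization group.
def dequantize_lut(indices, lut, scales, group_size=128):
    """indices: int array (out, in); lut: (8,) value table; scales: (out, in // group_size)."""
    w = lut[indices]                                        # map indices -> table values
    w = w.reshape(*indices.shape[:-1], -1, group_size)      # split columns into groups
    w = w * scales[..., None]                               # apply per-group scale
    return w.reshape(indices.shape)

# Toy usage: a fused kernel would perform this lookup in registers/shared memory
# and multiply against activations without materializing the full fp16 matrix.
rng = np.random.default_rng(0)
lut = np.sort(rng.standard_normal(8)).astype(np.float32)   # e.g., NormalFloat-style values
idx = rng.integers(0, 8, size=(4, 256))
scales = rng.random((4, 256 // 128)).astype(np.float32)
W = dequantize_lut(idx, lut, scales)
y = W @ rng.standard_normal(256).astype(np.float32)        # matmul after dequantization
```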

2022

Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task
Radostin Cholakov | Todor Kolev
Proceedings of the 5th International Conference on Natural Language and Speech Processing (ICNLSP 2022)