George Anthony Constantinides


2025

Training with Fewer Bits: Unlocking Edge LLMs Training with Stochastic Rounding
Taowen Liu | Marta Andronic | Deniz Gunduz | George Anthony Constantinides
Findings of the Association for Computational Linguistics: EMNLP 2025

LLM training is resource-intensive. Quantized training improves computational and memory efficiency but introduces quantization noise, which can hinder convergence and degrade model accuracy. Stochastic Rounding (SR) has emerged as a theoretically attractive alternative to deterministic rounding, offering unbiased gradient estimates. However, its interaction with other training factors, especially batch size, remains underexplored. In this paper, we present a theoretical and empirical study of mini-batch stochastic gradient descent (SGD) with SR, showing that increased batch sizes can compensate for reduced precision during backpropagation. Furthermore, we show that quantizing weights and activations impacts gradient variance in distinct ways. Our experiments validate these theoretical insights.
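The abstract's key property of SR is that, unlike round-to-nearest, it produces an unbiased estimate of the original value. The following minimal sketch is not the paper's implementation; the function name `stochastic_round` and the `step` parameter are illustrative, and the example only demonstrates the unbiasedness property in NumPy.

```python
import numpy as np

def stochastic_round(x, step):
    """Round x to a multiple of `step` stochastically.

    Each value rounds up with probability equal to its fractional
    distance from the lower grid point, so the rounding is unbiased:
    E[stochastic_round(x)] == x.
    """
    scaled = x / step
    lower = np.floor(scaled)
    prob_up = scaled - lower                       # fractional part in [0, 1)
    round_up = np.random.random(size=np.shape(x)) < prob_up
    return (lower + round_up) * step

# Averaging many stochastic roundings recovers the input, whereas
# deterministic round-to-nearest introduces a fixed bias for this value.
x = np.full(100_000, 0.3)
print(stochastic_round(x, step=1.0).mean())  # approximately 0.3 (unbiased)
print(np.round(x).mean())                    # 0.0 (biased)
```

The zero-mean rounding noise is what allows averaging over a larger mini-batch to reduce gradient variance, which is the batch-size/precision trade-off the paper studies.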