ZigZagKV: Dynamic KV Cache Compression for Long-context Modeling based on Layer Uncertainty

Meizhi Zhong, Xikai Liu, Chen Zhang, Yikun Lei, Yan Gao, Yao Hu, Kehai Chen, Min Zhang


Abstract
Large language models (LLMs) have become a research hotspot. To accelerate LLM inference, storing computed key-value (KV) caches in memory has become the standard technique. However, as the inference length increases, the growing KV caches can lead to out-of-memory issues. Many existing methods address this through KV cache compression, primarily by preserving key tokens across all layers to reduce information loss. Most of them allocate a uniform budget to each layer. However, we observe that the minimum budget needed to retain essential information varies across layers and models, from the perspectives of both attention and hidden-state output. Building on this observation, this paper proposes a simple yet effective KV cache compression method that leverages layer uncertainty to allocate a budget to each layer. Experimental results show that the proposed method can reduce the memory usage of the KV caches to only ~20% of that of full KV inference while achieving nearly lossless performance.
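The abstract does not spell out how uncertainty is measured or how the budget is split, so the sketch below is only an illustration of the idea under stated assumptions: uncertainty is proxied here by attention coverage (the smallest cache size capturing most of the attention mass), and the global budget is split proportionally with a per-layer floor. The function names (layer_uncertainty, allocate_budgets) and the coverage/floor parameters are hypothetical, not the paper's actual formulation.

import torch

def layer_uncertainty(attn_weights: torch.Tensor, coverage: float = 0.95) -> int:
    """Illustrative uncertainty proxy (assumption, not the paper's formula):
    the smallest number of cached tokens whose summed attention mass reaches
    `coverage`, averaged over heads.

    attn_weights: (num_heads, seq_len) attention of the last query over the cache.
    """
    sorted_w, _ = attn_weights.sort(dim=-1, descending=True)
    cum = sorted_w.cumsum(dim=-1)
    # First index per head where cumulative attention mass exceeds the threshold.
    needed = (cum < coverage).sum(dim=-1) + 1
    return int(needed.float().mean().item())

def allocate_budgets(uncertainties: list[int], total_budget: int, floor: int = 8) -> list[int]:
    """Split a global KV budget across layers proportional to per-layer
    uncertainty, guaranteeing each layer at least `floor` cached tokens."""
    num_layers = len(uncertainties)
    spare = total_budget - floor * num_layers
    assert spare >= 0, "total budget too small for the per-layer floor"
    weight_sum = sum(uncertainties)
    budgets = [floor + int(spare * u / weight_sum) for u in uncertainties]
    # Hand any rounding remainder to the most uncertain layers first.
    remainder = total_budget - sum(budgets)
    for i in sorted(range(num_layers), key=lambda i: -uncertainties[i])[:remainder]:
        budgets[i] += 1
    return budgets

Under this allocation, layers whose attention is concentrated on few tokens receive small budgets while more "uncertain" layers keep more of their cache, which is the non-uniform, layer-aware behavior the abstract argues for.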
Anthology ID:
2025.coling-main.596
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
8897–8907
URL:
https://aclanthology.org/2025.coling-main.596/
Cite (ACL):
Meizhi Zhong, Xikai Liu, Chen Zhang, Yikun Lei, Yan Gao, Yao Hu, Kehai Chen, and Min Zhang. 2025. ZigZagKV: Dynamic KV Cache Compression for Long-context Modeling based on Layer Uncertainty. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8897–8907, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
ZigZagKV: Dynamic KV Cache Compression for Long-context Modeling based on Layer Uncertainty (Zhong et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.596.pdf