Shiyu Ji
2025
Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query
Yixuan Wang | Shiyu Ji | Yijun Liu | Yuzhuang Xu | Yang Xu | Qingfu Zhu | Wanxiang Che
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) rely on the key-value (KV) cache to accelerate decoding by avoiding redundant computation. However, KV cache memory usage grows substantially with longer text sequences, posing challenges for efficient deployment. Existing KV cache eviction methods prune tokens using prefilling-stage attention scores, which are inconsistent with the queries issued during actual inference, especially under tight memory budgets. In this paper, we propose Lookahead Q-Cache (LAQ), a novel eviction framework that generates low-cost pseudo lookahead queries to better approximate the true decoding-stage queries. By using these lookahead queries as the observation window for importance estimation, LAQ achieves KV cache eviction that is more consistent with and more accurate for real inference scenarios. Experimental results on the LongBench and Needle-in-a-Haystack benchmarks show that LAQ outperforms existing methods across various budget levels, achieving a 1–4 point improvement on LongBench under limited cache budgets. Moreover, LAQ is complementary to existing approaches and can be flexibly combined with them to yield further improvements.
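In essence, the framework decodes a handful of cheap pseudo lookahead tokens, uses their query vectors (rather than prefill-stage queries) to score the cached KV pairs, and retains only the highest-scoring entries. Below is a minimal single-head sketch of this scoring-and-eviction step in PyTorch; the function name `evict_kv_with_lookahead`, the sum aggregation over the lookahead window, and the tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def evict_kv_with_lookahead(keys, values, lookahead_queries, budget):
    """Keep the `budget` KV pairs most attended by pseudo lookahead queries.

    keys, values:      (seq_len, d) cached key/value vectors (single head)
    lookahead_queries: (m, d) query vectors from cheaply decoded pseudo tokens
    budget:            number of KV pairs to retain
    """
    d = keys.shape[-1]
    # Attention of each lookahead query over the cached keys: (m, seq_len).
    scores = torch.softmax(lookahead_queries @ keys.T / d**0.5, dim=-1)
    # Importance of each cached position, aggregated over the lookahead window.
    importance = scores.sum(dim=0)  # (seq_len,)
    # Retain the top-`budget` positions, keeping them in original order.
    keep = torch.topk(importance, k=min(budget, keys.shape[0])).indices.sort().values
    return keys[keep], values[keep]
```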
CRVQ: Channel-Relaxed Vector Quantization for Extreme Compression of LLMs
Yuzhuang Xu | Shiyu Ji | Qingfu Zhu | Wanxiang Che
Transactions of the Association for Computational Linguistics, Volume 13
Powerful large language models (LLMs) are increasingly expected to be deployed at lower computational cost, bringing their capabilities to resource-constrained devices. Post-training quantization (PTQ) has emerged as a leading approach to this goal, with the best methods compressing weights to less than 2 bits on average. In this paper, we propose Channel-Relaxed Vector Quantization (CRVQ), a novel technique that significantly improves the performance of PTQ baselines at the cost of only a minimal number of additional bits. This state-of-the-art extreme compression method achieves its results through two key innovations: (1) carefully selecting and reordering a very small subset of critical weight channels, and (2) leveraging extended codebooks to relax the constraints on these critical channels. With our method, we demonstrate a 38.9% improvement over the strongest current sub-2-bit PTQ baseline, bringing 1-bit compression closer to lossless. Furthermore, our approach offers flexible trade-offs between quantization bit-width and performance, providing a wider range of deployment options for diverse hardware platforms. Code and checkpoints are available at https://github.com/xuyuzhuang11/CRVQ.
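To make the two innovations concrete, the sketch below vector-quantizes each weight channel with a small k-means codebook but "relaxes" a few critical channels by giving them a larger one. It is a heavily simplified stand-in, assuming critical channels are picked by column norm and using per-channel k-means via scikit-learn; the names `crvq_like_quantize` and `vq_channel`, the group size, and the bit settings are all hypothetical, not the released CRVQ code.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_channel(col, bits, group=4, seed=0):
    """Vector-quantize one weight channel: split it into length-`group`
    vectors, learn a 2**bits-entry codebook with k-means, and replace
    each vector by its nearest centroid."""
    pad = (-len(col)) % group
    v = np.pad(col, (0, pad)).reshape(-1, group)
    km = KMeans(n_clusters=min(2**bits, len(v)), n_init=4, random_state=seed).fit(v)
    q = km.cluster_centers_[km.labels_].reshape(-1)
    return q[:len(col)]

def crvq_like_quantize(W, n_critical=8, base_bits=2, extra_bits=4):
    """Quantize each channel (column) of W with a small codebook, but give
    the few highest-norm 'critical' channels a larger (relaxed) codebook."""
    critical = set(np.argsort(np.linalg.norm(W, axis=0))[-n_critical:])
    W_hat = np.empty_like(W)
    for j in range(W.shape[1]):
        bits = extra_bits if j in critical else base_bits
        W_hat[:, j] = vq_channel(W[:, j], bits)
    return W_hat
```

Because only `n_critical` of the channels carry the larger codebook, the average bit-width stays close to `base_bits`, which mirrors the paper's claim of large quality gains for only a minimal bit overhead.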