Faster In-Context Learning for LLMs via N-Gram Trie Speculative Decoding
Jinglin Chen | Qiwei Li | Zuchao Li | Baoyuan Qi | Liu Guoming | Haojun Ai | Hai Zhao | Ping Wang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
As a crucial method in prompt engineering, In-Context Learning (ICL) enhances the generalization and knowledge utilization capabilities of Large Language Models (LLMs) (Dong et al., 2024). However, the lengthy retrieved contexts and limited token throughput in autoregressive models significantly constrain reasoning speed. To address this challenge, we propose N-Gram Trie Speculative Decoding, a novel approach that leverages the overlap between context and model output. This method constructs an n-gram trie from the context to generate drafts, accelerating token generation for LLMs. We evaluate our approach on summarization, Retrieval-Augmented Generation (RAG), and context-based Question Answering (QA) tasks. Experimental results on Vicuna-7B, Llama2-7B-Chat, and Llama3-8B-Instruct demonstrate substantial speed improvements without compromising accuracy. Compared with various strong baselines, our method achieves the highest mean speedup, showcasing its effectiveness and efficiency.
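The core idea, as described in the abstract, is that model outputs in summarization, RAG, and context-based QA frequently copy spans verbatim from the prompt, so an n-gram index built over the context can cheaply propose multi-token drafts for the target LLM to verify in a single forward pass. The sketch below illustrates one way such an n-gram trie draft generator could look. It is not the authors' released implementation; the class name `NGramTrie`, the methods `insert` and `draft`, and parameters such as `max_n` and `k` are all assumptions made for illustration.

```python
from collections import defaultdict

class NGramTrie:
    """Toy n-gram index over context token IDs for draft generation.

    Hypothetical sketch, not the paper's published API: `insert`,
    `draft`, `max_n`, and `k` are illustrative names only.
    """

    def __init__(self, max_n: int = 4):
        self.max_n = max_n
        # Maps an (n-1)-gram prefix tuple -> counts of observed next tokens.
        self.next_counts = defaultdict(lambda: defaultdict(int))

    def insert(self, tokens: list[int]) -> None:
        """Index every n-gram (2 <= n <= max_n) in the context tokens."""
        for n in range(2, self.max_n + 1):
            for i in range(len(tokens) - n + 1):
                prefix = tuple(tokens[i : i + n - 1])
                nxt = tokens[i + n - 1]
                self.next_counts[prefix][nxt] += 1

    def draft(self, suffix: list[int], k: int = 8) -> list[int]:
        """Greedily extend the current output suffix with up to k draft
        tokens, backing off from the longest matching prefix to shorter
        ones when no match is found in the index."""
        out: list[int] = []
        ctx = list(suffix)
        for _ in range(k):
            nxt = None
            for n in range(self.max_n, 1, -1):
                if len(ctx) < n - 1:
                    continue  # Not enough history for this prefix length.
                prefix = tuple(ctx[-(n - 1):])
                counts = self.next_counts.get(prefix)
                if counts:
                    # Take the most frequent continuation of this prefix.
                    nxt = max(counts, key=counts.get)
                    break
            if nxt is None:
                break  # No match at any order; stop drafting early.
            out.append(nxt)
            ctx.append(nxt)
        return out

# Usage sketch: index the retrieved context, then repeatedly draft.
# `context_token_ids` and `generated_so_far` are placeholder variables.
# trie = NGramTrie(max_n=4)
# trie.insert(context_token_ids)
# draft_tokens = trie.draft(generated_so_far, k=8)
```

In a standard speculative-decoding loop, the target LLM would then score the drafted tokens in one forward pass and accept the longest prefix that agrees with its own predictions, which preserves the output distribution while skipping per-token autoregressive steps whenever the draft is accepted.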