LLoCO: Learning Long Contexts Offline
Sijun Tan | Xiuyu Li | Shishir G Patil | Ziyang Wu | Tianjun Zhang | Kurt Keutzer | Joseph E. Gonzalez | Raluca Ada Popa
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach that addresses this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. Our approach extends the effective context window of a 4k-token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using 30× fewer tokens during inference. LLoCO achieves up to 7.62× speed-up during inference and 11.52× higher throughput during finetuning, substantially reducing the cost of long-document question answering. This makes it a promising solution for efficient long-context processing.
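The core idea above (compress a long context offline into a much shorter representation, then answer queries against the compressed form) can be sketched as a toy example. Everything below is illustrative: the paper uses a learned context compressor combined with LoRA finetuning of the LLM, whereas this sketch substitutes simple mean-pooling over contiguous chunks to show the ~30× token reduction numerically.

```python
import numpy as np

def compress_context(token_embeddings: np.ndarray, num_summary: int) -> np.ndarray:
    """Compress (seq_len, dim) token embeddings into (num_summary, dim)
    summary vectors by mean-pooling contiguous chunks -- a crude stand-in
    for the learned compressor described in the paper."""
    chunks = np.array_split(token_embeddings, num_summary, axis=0)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

# Offline step: a 4096-token context shrinks to ~30x fewer summary vectors,
# which the (LoRA-adapted) model would then consume at inference time.
rng = np.random.default_rng(0)
context = rng.normal(size=(4096, 64))          # hypothetical token embeddings
summaries = compress_context(context, num_summary=136)  # 4096 / 136 ~= 30x

print(summaries.shape)  # (136, 64)
```

The compression ratio, embedding dimension, and pooling scheme here are all assumptions chosen to mirror the abstract's 30× figure, not the paper's actual architecture.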