Jiwei Tang


2025

Perception Compressor: A Training-Free Prompt Compression Framework in Long Context Scenarios
Jiwei Tang | Jin Xu | Tingwei Lu | Zhicheng Zhang | Yiming Zhao | Lin Hai | Hai-Tao Zheng
Findings of the Association for Computational Linguistics: NAACL 2025

Large language models (LLMs) demonstrate exceptional capabilities in various scenarios. However, in long-context scenarios they are burdened by redundant information and are sensitive to the position of key information. To address these challenges, we present Perception Compressor, a training-free prompt compression framework. It includes a perception retriever that leverages guiding questions and the instruction to retrieve the most relevant demonstrations, a dual-slope ratio allocator that dynamically allocates compression ratios and open-book ratios, and a semi-guided iterative compression that retains key information at the token level while removing tokens that distract the LLM. We conduct extensive experiments on long-context benchmarks, i.e., NaturalQuestions, LongBench, and MuSiQue. Experimental results show that Perception Compressor outperforms existing methods by a large margin, achieving state-of-the-art performance.
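The dual-slope idea in the abstract can be illustrated with a toy schedule: demonstrations ranked by relevance receive progressively heavier compression, with a gentle slope before a pivot rank and a steeper slope after it. This is only a minimal sketch of the general concept; the function name, parameters, and piecewise-linear form are assumptions for illustration, not the paper's actual allocator.

```python
def dual_slope_allocate(num_demos, base_ratio, steep, gentle, pivot):
    """Toy dual-slope schedule (hypothetical): demonstration at rank 0 is
    most relevant and gets the smallest compression ratio; ratios grow
    gently up to `pivot`, then steeply afterwards, capped at 1.0."""
    ratios = []
    for rank in range(num_demos):
        if rank < pivot:
            r = base_ratio + gentle * rank
        else:
            r = base_ratio + gentle * pivot + steep * (rank - pivot)
        ratios.append(min(r, 1.0))  # never compress more than 100%
    return ratios
```

Under such a schedule, the least relevant demonstrations absorb most of the compression budget, sparing the tokens the LLM is most likely to need.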

DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens
Shaoshen Chen | Yangning Li | Zishan Xu | Yongqin Zeng | Shunlong Wu | Xinshuo Hu | Zifei Shan | Xin Su | Jiwei Tang | Yinghui Li | Hai-Tao Zheng
Findings of the Association for Computational Linguistics: ACL 2025

Large Language Models (LLMs) face computational inefficiencies and redundant processing when handling long context inputs, prompting a focus on compression techniques. While existing semantic vector-based compression methods achieve promising performance, they fail to account for the intrinsic variation in information density between context chunks, instead allocating soft tokens uniformly across them. This uniform distribution inevitably diminishes allocation to information-critical regions. To address this, we propose Dynamic Allocation of Soft Tokens (DAST), a simple yet effective method that leverages the LLM’s intrinsic understanding of contextual relevance to guide compression. DAST combines perplexity-based local information with attention-driven global information to dynamically allocate soft tokens to information-rich chunks, enabling effective, context-aware compression. Experimental results across multiple benchmarks demonstrate that DAST surpasses state-of-the-art methods.
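A budget split of the kind the abstract describes can be sketched as follows: blend a per-chunk perplexity score with a per-chunk attention score, then divide the soft-token budget proportionally. The function name, the linear blend with weight `alpha`, and the largest-remainder rounding are illustrative assumptions, not DAST's actual formulation.

```python
import math

def allocate_soft_tokens(ppl_scores, attn_scores, total_tokens, alpha=0.5):
    """Toy context-aware allocation (hypothetical): combine a local
    (perplexity-based) and a global (attention-based) importance score
    per chunk, then split the soft-token budget proportionally."""
    assert len(ppl_scores) == len(attn_scores)
    combined = [alpha * p + (1 - alpha) * a
                for p, a in zip(ppl_scores, attn_scores)]
    total = sum(combined)
    raw = [total_tokens * c / total for c in combined]
    alloc = [math.floor(x) for x in raw]
    # hand out the rounding remainder to the largest fractional parts
    remainder = total_tokens - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i],
                   reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc
```

The contrast with uniform allocation is direct: a chunk whose combined score is twice another's receives roughly twice the soft tokens, instead of an equal share.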

RAISE: Reinforced Adaptive Instruction Selection For Large Language Models
Qingsong Lv | Yangning Li | Zihua Lan | Zishan Xu | Jiwei Tang | Tingwei Lu | Yinghui Li | Wenhao Jiang | Hong-Gee Kim | Hai-Tao Zheng | Philip S. Yu
Findings of the Association for Computational Linguistics: EMNLP 2025

Instruction tuning of large language models (LLMs) benefits more from a handful of high-quality examples than from hordes of low-quality ones. Existing selection methods typically rely on static, heuristic quality scores and are executed only once before training. Consequently, they neither adapt to the changing state of the model nor target downstream objectives, leaving substantial room for optimization. We propose RAISE (**R**einforced **A**daptive **I**nstruction **SE**lection), a *dynamic*, *task-driven* framework that integrates selection into every training step. At each step, RAISE estimates the expected contribution of each candidate instruction to task performance and admits only the most helpful. By modeling this process as sequential decision making, we optimize the selector with reinforcement learning, yielding an interpretable policy specialized for the target task. Extensive experiments show that RAISE reaches comparable or better results than full-data training while updating only 1% of the steps, demonstrating both high efficacy and significant computational savings.
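The sequential decision-making framing above can be made concrete with a toy step: a linear policy scores each candidate instruction, the top-k are admitted for training, and a REINFORCE-style update nudges the policy toward selections that improved a task reward. Everything here (the linear policy, the feature vectors, the update rule) is a hypothetical sketch of the general idea, not RAISE's actual selector or training objective.

```python
def policy_score(weights, features):
    """Linear scoring policy (toy): dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, features))

def raise_step(candidates, weights, k, reward_fn, lr=0.1):
    """One hypothetical selection step: rank candidates with the current
    policy, admit the top-k, observe a task reward, and reinforce the
    policy in proportion to that reward."""
    ranked = sorted(candidates,
                    key=lambda c: policy_score(weights, c["feat"]),
                    reverse=True)
    chosen = ranked[:k]
    reward = reward_fn(chosen)
    for c in chosen:  # REINFORCE-style update on the selected examples
        for i, f in enumerate(c["feat"]):
            weights[i] += lr * reward * f
    return chosen, weights
```

Because the reward is observed after each step, the selector can track the model's changing state rather than committing to a single static quality ranking before training.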