AdaTP: Attention-Debiased Token Pruning for Video Large Language Models

Fengyuan Sun, Leqi Shen, Hui Chen, Sicheng Zhao, Jungong Han, Guiguang Ding


Abstract
Video Large Language Models (Video LLMs) have achieved remarkable results in video understanding tasks. However, they often suffer from heavy computational overhead due to the large number of visual tokens generated from multiple video frames. Existing visual token compression methods often rely on attention scores from language models as guidance. However, these scores exhibit inherent biases: global bias reflects a tendency to focus on the two ends of the visual token sequence, while local bias leads to an over-concentration on the same spatial positions across different frames. To address the issue of attention bias, we propose Attention-Debiased Token Pruning for Video Large Language Models (AdaTP), a novel token pruning pipeline for Video LLMs. AdaTP integrates two dedicated debiasing modules into the pipeline, targeting global attention bias and local attention bias, respectively. Without the need for additional training, our method significantly reduces the computational overhead of Video LLMs while retaining the performance of the vanilla models. Extensive evaluation shows that AdaTP achieves state-of-the-art performance on various commonly used video understanding benchmarks. In particular, on LLaVA-OneVision-7B, AdaTP maintains performance without degradation while using as little as 27.3% of the FLOPs of the vanilla model. Our code will be released soon.
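Since the authors' code has not yet been released, the sketch below is only a minimal PyTorch illustration of the general idea described in the abstract: attention-guided token pruning combined with simple global and local debiasing heuristics. The function name debias_and_prune, the tensor shapes, and the specific normalizations are illustrative assumptions, not the authors' implementation.

import torch

def debias_and_prune(attn, keep_ratio=0.25, num_frames=8):
    # Illustrative sketch (NOT the authors' method): keep the visual tokens
    # with the highest attention scores after removing two positional biases.
    # attn: (num_frames * tokens_per_frame,) attention mass each visual
    # token receives from the text query, flattened over frames.
    tokens_per_frame = attn.numel() // num_frames
    scores = attn.view(num_frames, tokens_per_frame).clone()

    # "Global" debias (assumption): subtract each frame's mean score so
    # frames at the two ends of the token sequence are not uniformly favored.
    scores = scores - scores.mean(dim=1, keepdim=True)

    # "Local" debias (assumption): subtract the per-position mean over frames
    # so the same spatial positions are not always selected in every frame.
    scores = scores - scores.mean(dim=0, keepdim=True)

    # Keep the top keep_ratio fraction of tokens, returned in original order.
    flat = scores.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    return flat.topk(k).indices.sort().values

# Example: keep ~27% of 8 frames x 196 patch tokens.
keep_idx = debias_and_prune(torch.rand(8 * 196), keep_ratio=0.273)

The 0.273 keep ratio here only echoes the 27.3% FLOPs figure from the abstract; the actual mapping between token budget and FLOPs depends on the model.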
Anthology ID:
2025.findings-emnlp.174
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3273–3286
URL:
https://aclanthology.org/2025.findings-emnlp.174/
Cite (ACL):
Fengyuan Sun, Leqi Shen, Hui Chen, Sicheng Zhao, Jungong Han, and Guiguang Ding. 2025. AdaTP: Attention-Debiased Token Pruning for Video Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 3273–3286, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
AdaTP: Attention-Debiased Token Pruning for Video Large Language Models (Sun et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.174.pdf
Checklist:
2025.findings-emnlp.174.checklist.pdf