Seunghyuk Oh
2026
TABED: Test-Time Adaptive Ensemble Drafting for Robust Speculative Decoding in LVLMs
Minjae Lee | Wonjun Kang | Byeongkeun Ahn | Christian Classen | Kevin Galim | Seunghyuk Oh | Minghao Yan | Hyung Il Koo | Kangwook Lee
Findings of the Association for Computational Linguistics: EACL 2026
Speculative decoding (SD) has proven effective for accelerating LLM inference by quickly generating draft tokens and verifying them in parallel. However, SD remains largely unexplored for Large Vision-Language Models (LVLMs), which extend LLMs to process both image and text prompts. To address this gap, we benchmark existing inference methods with small draft models on 11 datasets across diverse input scenarios and observe scenario-specific performance fluctuations. Motivated by these findings, we propose Test-time Adaptive Batched Ensemble Drafting (TABED), which dynamically ensembles multiple drafts obtained via batch inference by leveraging deviations from past ground truths available in the SD setting. The dynamic ensemble method achieves an average robust walltime speedup of 1.74× over autoregressive decoding and a 5% improvement over single drafting methods, while remaining training-free and keeping ensembling costs negligible through parameter sharing. With its plug-and-play compatibility, we further enhance TABED by integrating advanced verification and alternative drafting methods. Code and custom-trained models are available at https://github.com/furiosa-ai/TABED.
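The draft-then-verify loop the abstract refers to can be illustrated with a small, self-contained sketch. The `draft_model` and `target_model` functions below are toy stand-ins, not the LVLM drafters or the TABED ensembling procedure from the paper, and greedy exact-match verification is used for simplicity.

```python
# Minimal sketch of the draft-then-verify loop underlying speculative decoding.
# The toy `draft_model` and `target_model` are placeholders (simple next-token
# functions), not the models or the TABED ensembling method from the paper.

def draft_model(prefix):
    # Toy drafter: predicts the next token as (last token + 1).
    return (prefix[-1] + 1) % 50

def target_model(prefix):
    # Toy target: agrees with the drafter except at every 5th position.
    nxt = (prefix[-1] + 1) % 50
    return nxt if len(prefix) % 5 else (nxt + 7) % 50

def speculative_step(prefix, num_draft_tokens=4):
    """Draft several tokens cheaply, then verify them with the target model.

    A draft token is accepted if it matches the target's greedy choice; on the
    first mismatch the target's own token is used instead, so the output is
    identical to pure target-model greedy decoding.
    """
    # 1) Drafting: the small model proposes a block of tokens autoregressively.
    draft, ctx = [], list(prefix)
    for _ in range(num_draft_tokens):
        tok = draft_model(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2) Verification: the target scores all draft positions
    #    (done in one parallel pass in practice; sequential here for clarity).
    accepted, ctx = [], list(prefix)
    for tok in draft:
        target_tok = target_model(ctx)
        if target_tok == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            accepted.append(target_tok)  # correction token from the target
            break
    return prefix + accepted

if __name__ == "__main__":
    seq = [0]
    for _ in range(5):
        seq = speculative_step(seq)
    print(seq)
```

Each call accepts as many draft tokens as the target agrees with, which is where the wall-time speedup comes from; TABED's contribution is in how the draft itself is formed, not in this verification step.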
2025
Mamba Drafters for Speculative Decoding
Daewon Choi | Seunghyuk Oh | Saket Dingliwal | Jihoon Tack | Kyuyoung Kim | Woomin Song | Seojin Kim | Insu Han | Jinwoo Shin | Aram Galstyan | Shubham Katiyar | Sravan Babu Bodapati
Findings of the Association for Computational Linguistics: EMNLP 2025
Speculative decoding has emerged as a promising approach to accelerating large language model (LLM) generation using a fast drafter while maintaining alignment with the target model’s distribution. However, existing approaches face a trade-off: external drafters offer flexibility but can suffer from slower drafting, while self-speculation methods use drafters tailored to the target model but require re-training. In this paper, we introduce novel drafters based on Mamba, a state-of-the-art state space model (SSM), as a solution that combines the best aspects of both approaches. By leveraging the linear structure of SSMs, our approach avoids the quadratic complexity inherent in traditional Transformer-based methods, enabling faster drafting and lower memory usage while maintaining the flexibility to work across different target models. We further enhance efficiency with a novel test-time tree search algorithm for generating high-quality draft candidates. Our empirical evaluation demonstrates that Mamba-based drafters not only outperform existing external drafting methods but are also comparable to state-of-the-art self-speculation approaches while using less memory and maintaining their cross-model adaptability.
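The abstract's point that the linear structure of SSMs avoids the growing per-token cost of Transformer-based drafting can be illustrated with a toy comparison. Everything below (the matrices A, B, Wq, Wk, Wv and both step functions) is a hypothetical sketch, not the Mamba drafter architecture or the tree-search algorithm from the paper.

```python
# Toy contrast between a recurrence-based (SSM-style) drafter and an
# attention-based (Transformer-style) drafter: the former keeps a fixed-size
# state per step, the latter attends over a cache that grows with the draft.
# Illustrative sketch only; not the paper's Mamba drafter.

import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

A = rng.normal(scale=0.1, size=(D, D))   # recurrent transition
B = rng.normal(size=(D,))                # input projection
Wq = rng.normal(size=(D, D))             # attention query projection
Wk = rng.normal(size=(D, D))             # attention key projection
Wv = rng.normal(size=(D, D))             # attention value projection

def ssm_draft_step(state, x):
    """One drafting step via a linear recurrence: O(1) state and O(1) work."""
    return np.tanh(A @ state + B * x)

def attention_draft_step(cache, x):
    """One drafting step via attention over a growing cache: O(t) work."""
    cache.append(np.full(D, x))
    H = np.stack(cache)                   # (t, D) -- grows with draft length
    q, K, V = Wq @ cache[-1], H @ Wk.T, H @ Wv.T
    scores = K @ q
    w = np.exp(scores - scores.max())
    return (w / w.sum()) @ V, cache

# Drafting 16 tokens: the SSM state stays size D, the attention cache grows.
state, cache = np.zeros(D), []
for t in range(16):
    x = float(t)
    state = ssm_draft_step(state, x)
    out, cache = attention_draft_step(cache, x)

print("SSM state size:", state.shape, "| attention cache length:", len(cache))
```

The fixed-size recurrent state is what lets an external SSM drafter keep drafting fast and memory-light regardless of context length, while remaining a separate model that can be paired with different targets.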