Seojin Kim
2025
Mamba Drafters for Speculative Decoding
Daewon Choi | Seunghyuk Oh | Saket Dingliwal | Jihoon Tack | Kyuyoung Kim | Woomin Song | Seojin Kim | Insu Han | Jinwoo Shin | Aram Galstyan | Shubham Katiyar | Sravan Babu Bodapati
Findings of the Association for Computational Linguistics: EMNLP 2025
Speculative decoding has emerged as a promising approach to accelerating large language model (LLM) generation using a fast drafter while maintaining alignment with the target model’s distribution. However, existing approaches face a trade-off: external drafters offer flexibility but can suffer from slower drafting, while self-speculation methods use drafters tailored to the target model but require re-training. In this paper, we introduce novel drafters based on Mamba, a state-of-the-art state space model (SSM), as a solution that combines the best aspects of both approaches. By leveraging the linear structure of SSMs, our approach avoids the quadratic complexity inherent in traditional Transformer-based methods, enabling faster drafting and lower memory usage while maintaining the flexibility to work across different target models. We further enhance efficiency with a novel test-time tree search algorithm for generating high-quality draft candidates. Our empirical evaluation demonstrates that Mamba-based drafters not only outperform existing external drafting methods but are also comparable to state-of-the-art self-speculation approaches while using less memory and maintaining their cross-model adaptability.
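As a rough illustration of the draft-and-verify loop that speculative decoding relies on (independent of the Mamba-specific drafting and the tree search described above), the minimal PyTorch sketch below shows the standard verification step with a generic external drafter. The `verify_drafts` function and its toy inputs are illustrative assumptions, not the paper's implementation.

```python
import torch

def verify_drafts(draft_tokens, q, p):
    """Standard speculative-decoding verification (a sketch, not the paper's exact code).

    draft_tokens: LongTensor [k]        tokens proposed by the drafter (e.g., a Mamba model)
    q:            FloatTensor [k, V]    drafter's next-token distributions at each draft position
    p:            FloatTensor [k+1, V]  target's distributions at the same positions (+ one bonus)
    Returns the accepted tokens; the target distribution is preserved exactly.
    """
    out = []
    for i, tok in enumerate(draft_tokens.tolist()):
        # Accept the i-th draft token with probability min(1, p/q).
        if torch.rand(()) < torch.clamp(p[i, tok] / q[i, tok], max=1.0):
            out.append(tok)
        else:
            # Rejected: resample from the residual distribution max(p - q, 0), renormalized.
            residual = torch.clamp(p[i] - q[i], min=0.0)
            out.append(torch.multinomial(residual / residual.sum(), 1).item())
            return out
    # Every draft accepted: sample one extra "bonus" token from the target's next distribution.
    out.append(torch.multinomial(p[-1], 1).item())
    return out

# Toy usage with a vocabulary of 5 tokens and k = 3 drafted tokens.
k, V = 3, 5
q = torch.softmax(torch.randn(k, V), dim=-1)
p = torch.softmax(torch.randn(k + 1, V), dim=-1)
drafts = torch.multinomial(q, 1).squeeze(-1)
print(verify_drafts(drafts, q, p))
```

The acceptance rule min(1, p/q) with residual resampling is what keeps the output distribution identical to the target model's, no matter how the drafter is built; the drafter's quality only affects how many tokens get accepted per step, which is where a fast, memory-light Mamba drafter helps.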
Training Text-to-Molecule Models with Context-Aware Tokenization
Seojin Kim | Hyeontae Song | Jaehyun Nam | Jinwoo Shin
Findings of the Association for Computational Linguistics: EMNLP 2025
Recently, text-to-molecule models have shown great potential across various chemical applications, e.g., drug discovery. These models adapt language models to molecular data by representing molecules as sequences of atoms. However, they rely on atom-level tokenizations, which primarily focus on modeling local connectivity, thereby limiting the models' ability to capture the global structural context within molecules. To tackle this issue, we propose a novel text-to-molecule model, coined Context-Aware Molecular T5 (CAMT5). Inspired by the significance of substructure-level contexts, e.g., ring systems, in understanding molecular structures, we introduce substructure-level tokenization for text-to-molecule models. Building on our tokenization scheme, we develop an importance-based training strategy that prioritizes key substructures, enabling CAMT5 to better capture the molecular semantics. Extensive experiments verify the superiority of CAMT5 in various text-to-molecule generation tasks. Intriguingly, we find that CAMT5 outperforms the state-of-the-art methods using only 2% of the training tokens. In addition, we propose a simple yet effective ensemble strategy that aggregates the outputs of text-to-molecule models to further boost the generation performance.
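To make the contrast between atom-level and substructure-level tokenization concrete, here is a small RDKit sketch. BRICS decomposition is used only as a stand-in for whatever substructure vocabulary CAMT5 actually builds, and the function names are illustrative assumptions.

```python
from rdkit import Chem
from rdkit.Chem import BRICS

def atom_level_tokens(smiles):
    """Atom-level view: one symbol per atom (the tokenization CAMT5 moves away from)."""
    mol = Chem.MolFromSmiles(smiles)
    return [atom.GetSymbol() for atom in mol.GetAtoms()]

def substructure_level_tokens(smiles):
    """Substructure-level view: chemically meaningful fragments as tokens.

    BRICS decomposition is a stand-in here; the paper's actual substructure
    vocabulary and importance weighting may differ.
    """
    mol = Chem.MolFromSmiles(smiles)
    return sorted(BRICS.BRICSDecompose(mol))

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
print(atom_level_tokens(smiles))          # e.g. ['C', 'C', 'O', 'O', 'C', ...]
print(substructure_level_tokens(smiles))  # a few fragment SMILES, including the aromatic ring
```

A substructure token such as an intact ring system carries global structural context in a single symbol, whereas an atom-level sequence has to reassemble that context implicitly, which is the motivation behind the paper's tokenization scheme.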