SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding

Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, Deyu Zhou


Abstract
Large Language Models (LLMs) demonstrate remarkable emergent abilities across various tasks, yet fall short on complex reasoning and planning tasks. Tree-search-based reasoning methods address this by encouraging the exploration of intermediate steps, surpassing the capabilities of chain-of-thought prompting. However, such methods introduce significant inference latency due to the systematic exploration and evaluation of multiple thought paths. This paper introduces SEED, a novel and efficient inference framework that improves runtime speed and GPU memory management concurrently. Based on scheduled speculative execution, SEED efficiently handles multiple iterations of thought generation and state evaluation, leveraging a rounds-scheduled strategy to manage draft model dispatching. Extensive experimental evaluations on three reasoning datasets demonstrate the superior speedup performance of SEED.
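
The abstract describes scheduled speculative execution with a rounds-scheduled strategy for dispatching a draft model across pending tree nodes. Below is a minimal, hypothetical sketch of such a loop, not the authors' implementation: model calls are mocked, and the names `Node`, `draft_propose`, and `target_verify` are illustrative assumptions rather than identifiers from the paper.

```python
"""Hypothetical sketch of rounds-scheduled speculative decoding over a reasoning tree."""
from collections import deque
from dataclasses import dataclass, field
import random

@dataclass
class Node:
    """A pending tree-search request: thought generation or state evaluation."""
    prompt: str
    tokens: list = field(default_factory=list)
    done: bool = False

def draft_propose(node, k=4):
    # Cheap draft model: propose k candidate tokens (mocked with random ids).
    return [random.randint(0, 99) for _ in range(k)]

def target_verify(batch):
    # Expensive target model: verify all drafted tokens in one batched pass.
    # Acceptance is mocked by keeping a random prefix of each proposal.
    results = []
    for node, proposal in batch:
        accepted = proposal[: random.randint(1, len(proposal))]
        results.append((node, accepted))
    return results

def run_scheduled_speculative_decoding(nodes, max_rounds=8, max_len=16):
    """Dispatch draft proposals in rounds, then verify each round together."""
    queue = deque(nodes)
    for _ in range(max_rounds):
        if not queue:
            break
        # Round: every pending node receives one draft proposal.
        batch = [(node, draft_propose(node)) for node in queue]
        # Single batched verification by the target model.
        for node, accepted in target_verify(batch):
            node.tokens.extend(accepted)
            node.done = len(node.tokens) >= max_len
        # Keep only unfinished nodes for the next round.
        queue = deque(n for n in queue if not n.done)
    return nodes

if __name__ == "__main__":
    frontier = [Node(prompt=f"thought-{i}") for i in range(3)]
    for node in run_scheduled_speculative_decoding(frontier):
        print(node.prompt, len(node.tokens), "tokens")
```

The design choice illustrated here is the rounds structure: all pending nodes drafted together, then verified in one batch, so the expensive target model runs once per round rather than once per node.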
Anthology ID:
2025.coling-main.328
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
4920–4937
URL:
https://aclanthology.org/2025.coling-main.328/
Cite (ACL):
Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, and Deyu Zhou. 2025. SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4920–4937, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding (Wang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.328.pdf