TwT: Thinking without Tokens by Habitual Reasoning Distillation with Multi-Teachers’ Guidance

Jingxian Xu, Mengyu Zhou, Weichang Liu, Hanbing Liu, Shi Han, Dongmei Zhang


Abstract
Large Language Models (LLMs) have made significant strides in problem-solving by incorporating reasoning processes. However, this enhanced reasoning capability results in an increased number of output tokens during inference, leading to higher computational costs. To address this challenge, we propose TwT (Thinking without Tokens), a method that reduces inference-time costs through habitual reasoning distillation with multi-teachers’ guidance, while maintaining high performance. Our approach introduces a Habitual Reasoning Distillation method, which internalizes explicit reasoning into the model’s habitual behavior through a Teacher-Guided compression strategy inspired by human cognition. Additionally, we propose Dual-Criteria Rejection Sampling (DCRS), a technique that generates a high-quality and diverse distillation dataset using multiple teacher models, making our method suitable for unsupervised scenarios. Experimental results demonstrate that TwT effectively reduces inference costs while preserving superior performance, achieving up to a 13.6% improvement in accuracy with fewer output tokens compared to other distillation methods, offering a highly practical solution for efficient LLM deployment.
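
The abstract names Dual-Criteria Rejection Sampling (DCRS) but does not give its formulation, so the following is only a minimal Python sketch of one way a two-criterion rejection loop over multiple teachers' outputs could be organized: keep a candidate reasoning trace only if it clears a quality bar and is not redundant with traces already kept. The scorer quality_fn, the similarity measure, and both thresholds are illustrative assumptions, not the paper's actual criteria.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        teacher: str    # which teacher model produced this sample
        reasoning: str  # the reasoning trace (later compressed/distilled)
        answer: str     # final answer extracted from the trace

    def dcrs_sketch(question, teachers, quality_fn, similarity_fn,
                    min_quality=0.8, max_similarity=0.9):
        """Hypothetical dual-criteria rejection sampling: a candidate is
        kept only if it (1) scores above a quality threshold and (2) is
        not too similar to any already-accepted candidate (diversity)."""
        accepted = []
        for teacher in teachers:          # multiple teacher models
            for cand in teacher(question):
                if quality_fn(cand) < min_quality:
                    continue              # criterion 1: reject low quality
                if any(similarity_fn(cand, kept) > max_similarity
                       for kept in accepted):
                    continue              # criterion 2: reject near-duplicates
                accepted.append(cand)
        return accepted

    # Toy usage with stand-in teachers and scorers (all hypothetical).
    if __name__ == "__main__":
        def teacher_a(q):
            return [Candidate("A", "step1 -> step2", "42")]

        def teacher_b(q):
            return [Candidate("B", "step1 -> step2", "42"),
                    Candidate("B", "different derivation", "42")]

        gold = "42"
        quality = lambda c: 1.0 if c.answer == gold else 0.0
        overlap = lambda a, b: float(a.reasoning == b.reasoning)

        kept = dcrs_sketch("toy question", [teacher_a, teacher_b],
                           quality, overlap)
        print([c.teacher for c in kept])  # -> ['A', 'B']; duplicate rejected

In this sketch the accepted set is both accurate (quality gate) and varied (diversity gate), which matches the abstract's stated goal of a high-quality and diverse distillation dataset; the paper's concrete scoring and similarity functions may differ.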
Anthology ID:
2025.findings-emnlp.894
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
16475–16489
URL:
https://aclanthology.org/2025.findings-emnlp.894/
Cite (ACL):
Jingxian Xu, Mengyu Zhou, Weichang Liu, Hanbing Liu, Shi Han, and Dongmei Zhang. 2025. TwT: Thinking without Tokens by Habitual Reasoning Distillation with Multi-Teachers’ Guidance. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 16475–16489, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TwT: Thinking without Tokens by Habitual Reasoning Distillation with Multi-Teachers’ Guidance (Xu et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.894.pdf
Checklist:
https://aclanthology.org/2025.findings-emnlp.894.checklist.pdf