Michael R. Metel
2026
Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models
Michael R. Metel | Yufei Cui | Boxing Chen | Prasanna Parthasarathi
Findings of the Association for Computational Linguistics: EACL 2026
Sequential test-time scaling is a promising training-free method to improve large reasoning model accuracy, but current implementations exhibit significant limitations. Inducing models to think for longer can increase their accuracy, but as the length of reasoning is further extended, it has also been shown to result in accuracy degradation and model instability. This work presents a novel sequential test-time scaling method, Min-Seek, which significantly improves model accuracy over a wide range of induced thoughts, stabilizes the accuracy of sequential scaling, and removes the need for reasoning-length fine-tuning. Beyond improving model accuracy over a variety of reasoning tasks, our method is inherently efficient, as only the KV pairs of one additional induced thought are kept in the KV cache during reasoning. With a custom KV cache that stores keys without position embeddings, dynamically encoding them contiguously before each new generated thought, our method can continue to reason well beyond a model’s maximum context length, and under mild conditions has linear computational complexity.
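The position-free KV cache described in the abstract can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the class name `PositionlessKVCache`, the eviction interface, and the single-head shapes are all assumptions; only the core idea, storing raw keys and re-applying rotary position embeddings at contiguous positions 0..n-1 on each read, comes from the text.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embeddings (RoPE) to vectors of shape (n, d)."""
    d = x.shape[-1]
    inv_freq = 1.0 / base ** (np.arange(0, d, 2) / d)   # (d/2,)
    ang = positions[:, None] * inv_freq[None, :]        # (n, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

class PositionlessKVCache:
    """Toy cache (hypothetical interface) storing un-rotated keys.

    Because stored keys carry no position information, old thoughts can
    be evicted and the survivors re-encoded at contiguous positions
    0..n-1, keeping attention inside the model's trained context range.
    """
    def __init__(self, head_dim):
        self.keys = np.empty((0, head_dim))
        self.values = np.empty((0, head_dim))

    def append(self, k, v):
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def evict(self, start, stop):
        # Drop a span of cached entries (e.g. a discarded thought).
        keep = np.r_[0:start, stop:len(self.keys)]
        self.keys, self.values = self.keys[keep], self.values[keep]

    def rotated_keys(self):
        # Re-apply RoPE at contiguous positions before each new thought.
        return rope(self.keys, np.arange(len(self.keys)))
```

After an eviction, `rotated_keys()` yields exactly what a fresh encoding of the surviving keys at positions 0..n-1 would, which is why generation can continue past the nominal context length.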
2024
Draft on the Fly: Adaptive Self-Speculative Decoding using Cosine Similarity
Michael R. Metel | Peng Lu | Boxing Chen | Mehdi Rezagholizadeh | Ivan Kobyzev
Findings of the Association for Computational Linguistics: EMNLP 2024
We present a simple, on-the-fly method for faster inference of large language models. Unlike other (self-)speculative decoding techniques, our method does not require fine-tuning or black-box optimization to generate a fixed draft model, relying instead on simple rules to generate varying draft models adapted to the input context. We show empirically that our lightweight algorithm is competitive with the current state of the art for self-speculative decoding, while being a truly plug-and-play method.
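One common self-speculative recipe builds the draft model by skipping layers whose output barely rotates the hidden state, and the abstract's mention of cosine similarity suggests a rule of that flavor. The sketch below is a guess at such a rule, not the paper's algorithm: the function name, the threshold value, and the layer-selection criterion are all assumptions made for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_skippable_layers(hidden_states, threshold=0.995):
    """Hypothetical rule: given per-layer hidden states from a forward
    pass (a list of L+1 vectors, input plus one per layer), mark layers
    whose output is nearly parallel to their input as skippable when
    forming an on-the-fly draft model."""
    skippable = []
    for i in range(len(hidden_states) - 1):
        if cosine_similarity(hidden_states[i], hidden_states[i + 1]) >= threshold:
            skippable.append(i)
    return skippable
```

A rule like this needs no training: it reads the hidden states already computed during verification, so the draft model adapts to the input context for free.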