PToco: Prefix-based Token-level Collaboration Enhances Reasoning for Multi-LLMs

Yuang Bian, Yupian Lin, Jingping Liu, Tong Ruan


Abstract
Collaboration between multiple Large Language Models (LLMs) has attracted significant attention for its potential to mitigate hallucinations and enhance reasoning capabilities. Previous approaches, such as multi-agent debate and decoding-time integration, either rely on highly capable models with strong self-reflection abilities or are limited to models sharing the same tokenizer. To address these limitations, we introduce PToco (Prefix-based Token-level Collaboration), a novel mechanism that enables effective collaboration among less capable LLMs, independent of tokenizer differences. PToco uses a prefix-grouping method to extract consensus among tokens with varying levels of granularity, ensuring coherent and robust token generation across multiple models. Experimental results on a series of reasoning tasks demonstrate that PToco significantly improves performance over individual models. Furthermore, this approach generalizes well across different quantities and sizes of participating models, providing a more flexible and efficient solution for multi-LLM ensembles.
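
The abstract does not spell out the grouping procedure, but the following toy sketch in Python (with invented candidate lists and an assumed scoring rule, not the paper's actual algorithm) illustrates the general idea of prefix-based consensus over next-token candidates produced by models with different tokenizers:

# Illustrative sketch only: group candidate next-tokens from several models by
# shared prefixes and pick the prefix with the strongest aggregate support.
# The grouping and aggregation rules here are assumptions for illustration.
from collections import defaultdict

def prefix_consensus(candidate_lists):
    """candidate_lists: one dict per model mapping a candidate next-token
    string to its probability, e.g. [{"intern": 0.6, "in": 0.3}, ...].
    Returns the prefix string with the highest aggregated score."""
    # Every candidate string is treated as a possible consensus prefix; a
    # coarser token (e.g. "in") absorbs the probability mass of finer tokens
    # that extend it (e.g. "intern", "internal").
    prefixes = {tok for cands in candidate_lists for tok in cands}
    scores = defaultdict(float)
    for prefix in prefixes:
        for cands in candidate_lists:
            for tok, prob in cands.items():
                if tok.startswith(prefix):
                    scores[prefix] += prob
    # Among (near-)top-scoring prefixes, prefer the longest one so the
    # emitted text stays as specific as the models jointly allow.
    best = max(scores.values())
    winners = [p for p, s in scores.items() if s >= best - 1e-9]
    return max(winners, key=len)

# Example: two hypothetical models tokenize "internal" at different granularities.
print(prefix_consensus([{"intern": 0.6, "in": 0.3},
                        {"internal": 0.5, "int": 0.4}]))  # -> "in"

In this toy run the consensus is the coarse prefix "in", the longest piece of text both hypothetical candidate sets jointly support; subsequent decoding steps would then continue from that prefix.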
Anthology ID: 2025.coling-main.556
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 8326–8335
URL: https://aclanthology.org/2025.coling-main.556/
Cite (ACL): Yuang Bian, Yupian Lin, Jingping Liu, and Tong Ruan. 2025. PToco: Prefix-based Token-level Collaboration Enhances Reasoning for Multi-LLMs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8326–8335, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): PToco: Prefix-based Token-level Collaboration Enhances Reasoning for Multi-LLMs (Bian et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.556.pdf