Diverge to Induce Prompting: Multi-Rationale Induction for Zero-Shot Reasoning

Po-Chun Chen, Hen-Hsen Huang, Hsin-Hsi Chen


Abstract
To address the instability of unguided reasoning paths in standard Chain-of-Thought prompting, recent methods guide large language models (LLMs) by first eliciting a single reasoning strategy. However, relying on just one strategy per question can still limit performance across diverse tasks. We propose Diverge-to-Induce Prompting (DIP), a framework that first prompts an LLM to generate multiple diverse high-level rationales for each question. Each rationale is then elaborated into a detailed, step-by-step draft plan. Finally, these draft plans are induced into a single final plan. DIP improves zero-shot reasoning accuracy without relying on resource-intensive sampling. Experiments show that DIP outperforms single-strategy prompting, demonstrating the effectiveness of multi-plan induction for prompt-based reasoning.
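The three-stage pipeline described in the abstract (diverge into rationales, elaborate into draft plans, induce a final plan) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the `llm` argument is any user-supplied callable mapping a prompt string to a completion string, and all prompt wordings are hypothetical placeholders.

```python
from typing import Callable, List


def dip(llm: Callable[[str], str], question: str, n_rationales: int = 3) -> str:
    """Hypothetical sketch of Diverge-to-Induce Prompting (DIP).

    `llm` is any prompt -> completion callable; prompt texts below are
    illustrative assumptions, not the paper's actual prompts.
    """
    # Stage 1 (Diverge): elicit multiple diverse high-level rationales.
    rationales: List[str] = [
        llm(f"Propose distinct high-level strategy #{i + 1} for solving:\n{question}")
        for i in range(n_rationales)
    ]
    # Stage 2 (Elaborate): expand each rationale into a step-by-step draft plan.
    drafts: List[str] = [
        llm(f"Expand this strategy into a detailed step-by-step plan:\n{r}")
        for r in rationales
    ]
    # Stage 3 (Induce): merge the draft plans into one final plan.
    final_plan = llm(
        "Induce a single best plan from these drafts:\n" + "\n---\n".join(drafts)
    )
    # Answer the question by following the induced plan.
    return llm(f"Question: {question}\nPlan: {final_plan}\nFollow the plan and answer.")
```

Because the framework is purely prompt-based, swapping in any chat-completion API for `llm` suffices; no sampling over multiple full reasoning chains is needed.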
Anthology ID:
2025.findings-ijcnlp.6
Volume:
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Kentaro Inui, Sakriani Sakti, Haofen Wang, Derek F. Wong, Pushpak Bhattacharyya, Biplab Banerjee, Asif Ekbal, Tanmoy Chakraborty, Dhirendra Pratap Singh
Venue:
Findings
Publisher:
The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
Pages:
102–115
URL:
https://aclanthology.org/2025.findings-ijcnlp.6/
Cite (ACL):
Po-Chun Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2025. Diverge to Induce Prompting: Multi-Rationale Induction for Zero-Shot Reasoning. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 102–115, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.
Cite (Informal):
Diverge to Induce Prompting: Multi-Rationale Induction for Zero-Shot Reasoning (Chen et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-ijcnlp.6.pdf