Automatic Model Selection with Large Language Models for Reasoning

James Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, Michael Xie


Abstract
Chain-of-Thought (CoT) and Program-Aided Language Models (PAL) represent two distinct reasoning methods, each with its own strengths. CoT employs natural language, offering flexibility and interpretability, while PAL utilizes programming language, yielding more structured and rigorous logic. We introduce a model selection method to combine the best of both worlds by employing a large language model (LLM) to dynamically select between them. Our theoretical analysis underscores the feasibility of this method, which is further corroborated by empirical results. Our proposed method demonstrates significant performance improvements across eight reasoning datasets with Codex, ChatGPT, and GPT-4. Additionally, our method is complementary to self-consistency; when integrated, it can further enhance performance while significantly reducing computation costs. Moreover, we achieve new state-of-the-art results on GSM8K and SVAMP, with respective accuracies of 96.8% and 93.7%.
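The selection idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `generate_cot`, `generate_pal`, and `select` are hypothetical stubs standing in for LLM calls, and the selection rule (return the shared answer on agreement, otherwise ask a selector to choose) is one plausible reading of the described method.

```python
# Illustrative sketch of LLM-based selection between CoT and PAL.
# All three generator/selector functions are hypothetical stubs for LLM calls.

def generate_cot(question: str) -> tuple[str, str]:
    # Hypothetical stub: a CoT prompt would return natural-language
    # reasoning steps plus a final answer.
    steps = "Half of 10 is 5; 5 plus 3 is 8."
    return steps, "8"

def generate_pal(question: str) -> tuple[str, str]:
    # Hypothetical stub: a PAL prompt would return a program whose
    # execution yields the answer.
    program = "answer = 10 / 2 + 3"
    scope: dict = {}
    exec(program, scope)  # PAL obtains the answer by running the program
    return program, str(int(scope["answer"]))

def select(question: str, cot_solution: str, pal_solution: str) -> str:
    # Hypothetical stub: in the paper's setting an LLM inspects both
    # solutions and names the more reliable one ("cot" or "pal").
    return "pal"  # placeholder choice

def answer_with_selection(question: str) -> str:
    cot_steps, cot_ans = generate_cot(question)
    pal_prog, pal_ans = generate_pal(question)
    if cot_ans == pal_ans:  # the two methods agree: either answer works
        return cot_ans
    choice = select(question, cot_steps, pal_prog)
    return cot_ans if choice == "cot" else pal_ans

print(answer_with_selection("What is half of 10, plus 3?"))  # → 8
```

In this sketch, selection only costs an extra LLM call when the two methods disagree, which is consistent with the abstract's claim of reducing computation relative to plain self-consistency.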
Anthology ID:
2023.findings-emnlp.55
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
758–783
URL:
https://aclanthology.org/2023.findings-emnlp.55
DOI:
10.18653/v1/2023.findings-emnlp.55
Cite (ACL):
James Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Michael Xie. 2023. Automatic Model Selection with Large Language Models for Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 758–783, Singapore. Association for Computational Linguistics.
Cite (Informal):
Automatic Model Selection with Large Language Models for Reasoning (Zhao et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.55.pdf