LM2: A Simple Society of Language Models Solves Complex Reasoning

Gurusha Juneja, Subhabrata Dutta, Tanmoy Chakraborty


Abstract
Despite demonstrating emergent reasoning abilities, Large Language Models (LLMs) often lose track of complex, multi-step reasoning. Existing studies show that providing guidance by decomposing the original question into multiple subproblems elicits more robust LLM reasoning – a decomposer generates the subproblems, and a solver solves each of them. However, these techniques fail to accommodate coordination between the decomposer and solver modules (whether in a single model or in different specialized ones) – the decomposer does not keep track of the solver's ability to follow the decomposed reasoning. In this paper, we propose LM2 to address these challenges. LM2 modularizes decomposition, solution, and verification into three different language models. The decomposer module identifies the key concepts necessary to solve the problem and generates step-by-step subquestions according to the reasoning requirement. The solver model generates solutions to the subproblems, which are then checked by the verifier module; depending on the feedback from the verifier, the reasoning context is constructed from the subproblems and their solutions. These models are trained to coordinate using policy learning. Exhaustive experimentation suggests the superiority of LM2 over existing methods on in- and out-of-domain reasoning problems, outperforming the best baselines by 8.1% on MATH, 7.71% on JEEBench, and 9.7% on MedQA problems (code available at https://github.com/LCS2-IIITD/Language_Model_Multiplex).
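The control flow the abstract describes can be sketched roughly as below. This is an illustrative sketch only, not the paper's implementation: the three functions are hypothetical stand-ins for the fine-tuned decomposer, solver, and verifier models, and the retry policy is an assumption made for demonstration.

```python
# Illustrative sketch of the decomposer-solver-verifier loop from the
# abstract. The three "model" functions are hypothetical stand-ins for
# LLM calls; a real system would query fine-tuned language models.

def decomposer(question):
    """Stand-in: split a question into ordered subquestions."""
    return [f"Subquestion {i + 1} of: {question}" for i in range(3)]

def solver(subquestion, context):
    """Stand-in: answer one subquestion given the context so far."""
    return f"answer({subquestion})"

def verifier(subquestion, answer):
    """Stand-in: accept or reject a proposed answer."""
    return answer.startswith("answer(")

def lm2_pipeline(question, max_retries=2):
    """Build the reasoning context step by step, re-querying the
    solver whenever the verifier rejects an answer (assumed policy)."""
    context = []
    for sub in decomposer(question):
        for _ in range(max_retries + 1):
            ans = solver(sub, context)
            if verifier(sub, ans):
                context.append((sub, ans))  # verified step joins context
                break
        else:
            context.append((sub, None))  # gave up after retries
    return context
```

The key design point the abstract emphasizes is the feedback edge: the verifier's accept/reject signal gates what enters the shared reasoning context, which in LM2 is what the three models are jointly trained (via policy learning) to coordinate on.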
Anthology ID:
2024.emnlp-main.920
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16473–16484
URL:
https://aclanthology.org/2024.emnlp-main.920
Cite (ACL):
Gurusha Juneja, Subhabrata Dutta, and Tanmoy Chakraborty. 2024. LM2: A Simple Society of Language Models Solves Complex Reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16473–16484, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
LM2: A Simple Society of Language Models Solves Complex Reasoning (Juneja et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.920.pdf