Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards

Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo


Abstract
Training on large amounts of rationales (i.e., CoT fine-tuning) has been found effective for improving the mathematical reasoning of large language models (LLMs). However, acquiring human-authored solutions or augmenting rationales from proprietary models is costly and not scalable. In this paper, we study whether LLMs can self-improve their mathematical reasoning capabilities. To this end, we propose Self-Explore, where the LLM is tasked to explore the first wrong step (i.e., the first pit) within the rationale and use such signals as fine-grained rewards for further improvement. On the GSM8K and MATH test sets, Self-Explore achieves 11.57% and 2.89% improvement on average across three LLMs compared to supervised fine-tuning (SFT). Our code is available here.
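The abstract describes the core idea only at a high level. As a rough illustration (not the authors' implementation), locating the "first pit" in a step-wise rationale could be sketched as below; the helper names sample_continuations and is_correct are hypothetical stand-ins for an LLM sampling call and an answer checker.

```python
# Minimal sketch, assuming a wrong rationale split into steps: from each step prefix,
# sample k continuations and mark the earliest step after which none of the sampled
# continuations reaches the gold answer. That index is treated as the "first pit".
# sample_continuations and is_correct are hypothetical callables, not the paper's API.
from typing import Callable, List, Optional


def find_first_pit(
    question: str,
    steps: List[str],                                        # rationale split into steps
    sample_continuations: Callable[[str, int], List[str]],   # (prompt, k) -> sampled completions
    is_correct: Callable[[str], bool],                       # does a completion reach the gold answer?
    k: int = 4,
) -> Optional[int]:
    """Return the index of the first step from which no sampled continuation recovers."""
    prefix = question
    for i, step in enumerate(steps):
        prefix = prefix + "\n" + step
        completions = sample_continuations(prefix, k)
        # If every continuation from this prefix fails, step i is the "first pit".
        if not any(is_correct(c) for c in completions):
            return i
    return None  # no pit found before the final step


# The returned index could then be used to build fine-grained preference pairs
# (prefix before the pit as context, the pit step as the rejected continuation),
# one plausible way such signals serve as rewards for preference-style training.
```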
Anthology ID: 2024.findings-emnlp.78
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1444–1466
URL: https://aclanthology.org/2024.findings-emnlp.78/
DOI: 10.18653/v1/2024.findings-emnlp.78
Cite (ACL): Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, and Minjoon Seo. 2024. Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1444–1466, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards (Hwang et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.78.pdf