Learning to Reason via Self-Iterative Process Feedback for Small Language Models

Kaiyuan Chen, Jin Wang, Xuejie Zhang


Abstract
Small language models (SLMs) are more efficient, cost-effective, and customizable than large language models (LLMs), though they often underperform in specific areas such as reasoning. Previous methods for enhancing SLM reasoning, such as supervised fine-tuning and distillation, often depend on costly external supervision signals; with only limited signals, SLMs tend to become overconfident, which restricts their abilities. This study instead enables SLMs to learn to reason from self-iterative feedback. Using odds ratio preference optimization (ORPO), we fine-tune and align SLMs with positive and negative signals generated by the models themselves. Additionally, we introduce process-level supervision into preference alignment through sampling-based inference simulation and process reward models. Compared to supervised fine-tuning (SFT), our method improves the performance of Gemma-2B by 12.43 (Acc) on GSM8K and 3.95 (Pass@1) on MBPP. The proposed method also demonstrates superior out-of-domain generalization on MMLU_Math and HumanEval.
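To illustrate how the ORPO objective used in this paper combines a supervised term with an odds-ratio preference term over self-generated positive/negative pairs, here is a minimal sketch. It is not the authors' code: the function name, the λ weight, and the assumption that inputs are length-normalized sequence log-probabilities are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, chosen_nll, lam=0.1):
    """Sketch of the ORPO objective.

    chosen_logps / rejected_logps: length-normalized sequence log-probs
        log P(y|x) of the preferred and dispreferred responses, shape (batch,).
    chosen_nll: standard next-token NLL on the preferred response, shape (batch,).
    lam: weight on the odds-ratio term (hyperparameter, assumed value).
    """
    # Clamp to keep P(y|x) strictly below 1 so log(1 - P) stays finite.
    chosen_logps = chosen_logps.clamp(max=-1e-6)
    rejected_logps = rejected_logps.clamp(max=-1e-6)

    # log odds(y|x) = log P(y|x) - log(1 - P(y|x))
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # Odds-ratio preference term: -log sigmoid(log odds_w - log odds_l)
    ratio_loss = -F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # ORPO = SFT loss on the chosen response + weighted odds-ratio term
    return (chosen_nll + lam * ratio_loss).mean()
```

In the self-iterative setting described in the abstract, the chosen/rejected pairs would come from the SLM's own sampled reasoning traces, scored by a process reward model rather than an external teacher.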
Anthology ID:
2025.coling-main.203
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3027–3042
URL:
https://aclanthology.org/2025.coling-main.203/
Cite (ACL):
Kaiyuan Chen, Jin Wang, and Xuejie Zhang. 2025. Learning to Reason via Self-Iterative Process Feedback for Small Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3027–3042, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Learning to Reason via Self-Iterative Process Feedback for Small Language Models (Chen et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.203.pdf