Enhancing the Reasoning Capabilities of Small Language Models via Solution Guidance Fine-Tuning

Jing Bi, Yuting Wu, Weiwei Xing, Zhenjie Wei


Abstract
Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks. Advances in prompt engineering and fine-tuning techniques have further enhanced their ability to address complex reasoning challenges. However, these advanced capabilities are often exclusive to models exceeding 100 billion parameters. Although Chain-of-Thought (CoT) fine-tuning methods have been explored for smaller models (under 10 billion parameters), they typically depend on extensive CoT training data, which can introduce inconsistencies and limit effectiveness in low-data settings. To overcome these limitations, this paper introduces a new reasoning strategy, Solution Guidance (SG), and a plug-and-play training paradigm, Solution-Guidance Fine-Tuning (SGFT), for enhancing the reasoning capabilities of small language models (SLMs). SG focuses on problem understanding and decomposition at the semantic and logical levels, rather than on specific computations, which effectively improves the generalization and reasoning abilities of SLMs. With only a small amount of SG training data, SGFT can fine-tune an SLM to produce accurate problem-solving guidance, which can then be flexibly fed to any SLM as a prompt, enabling it to generate correct answers directly. Experimental results demonstrate that our method significantly improves the performance of SLMs on various reasoning tasks, enhancing both their practicality and efficiency within resource-constrained environments.
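The abstract describes a two-stage inference pipeline: an SG-fine-tuned SLM first produces computation-free solution guidance, which is then fed as a prompt to any answer-generating SLM. Below is a minimal sketch of that pipeline using Hugging Face transformers; the model paths, prompt templates, and generation settings are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of the two-stage SGFT inference pipeline described in the
# abstract. Model paths, prompt templates, and generation settings here are
# hypothetical placeholders, not the authors' exact setup.
from transformers import pipeline

# Stage 1: an SLM fine-tuned on Solution Guidance (SG) data decomposes the
# problem at the semantic/logical level, without performing computations.
sg_generator = pipeline("text-generation", model="path/to/sg-finetuned-slm")

# Stage 2: any off-the-shelf SLM consumes the guidance as part of its prompt
# and produces the final answer directly.
answer_model = pipeline("text-generation", model="path/to/answer-slm")

def solve(question: str) -> str:
    # Generate solution guidance: how to understand and break down the problem.
    sg_prompt = f"Question: {question}\nProvide step-by-step solution guidance:"
    guidance = sg_generator(
        sg_prompt, max_new_tokens=256, return_full_text=False
    )[0]["generated_text"]

    # Prepend the guidance to the answer model's prompt, so the second SLM
    # only has to follow the plan rather than devise it.
    answer_prompt = (
        f"Question: {question}\n"
        f"Solution guidance:\n{guidance}\n"
        f"Following the guidance, the answer is:"
    )
    return answer_model(
        answer_prompt, max_new_tokens=256, return_full_text=False
    )[0]["generated_text"]
```

Because the guidance is plain text passed as a prompt, the second stage is plug-and-play: the answer model can be swapped for any SLM without retraining, which is the flexibility the abstract claims for SGFT.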
Anthology ID:
2025.coling-main.609
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
9074–9084
URL:
https://aclanthology.org/2025.coling-main.609/
Cite (ACL):
Jing Bi, Yuting Wu, Weiwei Xing, and Zhenjie Wei. 2025. Enhancing the Reasoning Capabilities of Small Language Models via Solution Guidance Fine-Tuning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9074–9084, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Enhancing the Reasoning Capabilities of Small Language Models via Solution Guidance Fine-Tuning (Bi et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.609.pdf