Yuxian Wang
2025
iTool: Reinforced Fine-Tuning with Dynamic Deficiency Calibration for Advanced Tool Use
Yirong Zeng | Xiao Ding | Yuxian Wang | Weiwen Liu | Yutai Hou | Wu Ning | Xu Huang | Duyu Tang | Dandan Tu | Bing Qin | Ting Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Augmenting large language models (LLMs) with external tools is a promising approach to enhancing their capabilities, especially for complex tasks. Synthesizing tool-use data through real-world simulations is an effective way to achieve this. However, our investigation reveals that training gains decay significantly as synthetic data increases: the model struggles to benefit from additional synthetic data, which fails to equip it with advanced tool-use capabilities in complex scenarios. Moreover, we find that this limitation usually manifests as fragment-level deficiencies (i.e., parameter errors) in responses. To this end, we propose an iterative reinforced fine-tuning strategy designed to alleviate this limitation. The strategy involves: (1) enhancing the diversity of responses in the synthetic data through path exploration with Monte Carlo Tree Search, and (2) iteratively pinpointing the model's deficiencies by constructing fine-grained preference pairs, then applying preference optimization algorithms for targeted improvement. Experiments show that our method achieves 13.11% better performance than the same-size base model, improves on the baseline by 6.5% in complex scenarios, and also outperforms larger open-source and closed-source models.
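To make the preference-optimization step concrete, the following is a minimal, hypothetical sketch of a DPO-style loss over (chosen, rejected) response fragments. The abstract only states that preference optimization algorithms are applied to fine-grained preference pairs; the choice of DPO, the function names, and the tensor layout below are assumptions, not the paper's released code.

```python
# Hypothetical DPO-style preference loss over fine-grained (fragment-level)
# preference pairs; names and structure are assumptions, not the iTool code.
import torch
import torch.nn.functional as F

def dpo_preference_loss(policy_chosen_logps: torch.Tensor,
                        policy_rejected_logps: torch.Tensor,
                        ref_chosen_logps: torch.Tensor,
                        ref_rejected_logps: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    """DPO loss for a batch of (chosen, rejected) pairs.

    Each tensor holds the summed token log-probabilities of a response
    (or response fragment) under the policy / reference model.
    """
    # Log-ratio of policy vs. reference for preferred and dispreferred responses.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry style objective: push the chosen-minus-rejected margin positive.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
if __name__ == "__main__":
    lp = lambda: torch.randn(4)
    loss = dpo_preference_loss(lp(), lp(), lp(), lp())
    print(loss.item())
```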
Tool Zero: Training Tool-Augmented LLMs via Pure RL from Scratch
Yirong Zeng | Xiao Ding | Yutai Hou | Yuxian Wang | Li Du | Juyi Dai | Qiuyang Ding | Duyu Tang | Dandan Tu | Weiwen Liu | Bing Qin | Ting Liu
Findings of the Association for Computational Linguistics: EMNLP 2025
Training tool-augmented LLMs has emerged as a promising approach to enhancing language models' capabilities for complex tasks. The current supervised fine-tuning paradigm relies on constructing extensive domain-specific datasets to train models. However, this approach often struggles to generalize effectively to unfamiliar or intricate tool-use scenarios. Recently, the reinforcement learning (RL) paradigm has been shown to endow LLMs with superior reasoning and generalization abilities. In this work, we address a key question: can pure RL effectively elicit a model's intrinsic reasoning capabilities and enhance tool-agnostic generalization? We propose a dynamic generalization-guided reward design for rule-based RL, which progressively shifts rewards from exploratory to exploitative tool-use patterns. Based on this design, we introduce the Tool-Zero series of models, which enable LLMs to autonomously use general tools by scaling up RL directly from Zero models (i.e., base models without post-training). Experimental results demonstrate that our models achieve over 7% performance improvement compared to both SFT and RL-with-SFT models under the same experimental settings. These gains are consistently replicated across cross-dataset and intra-dataset evaluations, validating the effectiveness and robustness of our methods.
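The reward design can be illustrated with a small, hypothetical sketch of a rule-based reward that anneals from exploratory to exploitative tool-use patterns over training. The linear schedule, the function name, and the scoring rules below are assumptions made for illustration, not the Tool-Zero implementation.

```python
# Illustrative rule-based reward that shifts from rewarding exploration
# (any well-formed tool call) toward exploitation (the correct tool with
# correct arguments). All names and the linear schedule are assumptions.

def dynamic_tool_reward(step: int, total_steps: int,
                        is_well_formed: bool,
                        is_correct_call: bool) -> float:
    """Blend an exploratory format reward with an exploitative correctness reward."""
    progress = min(step / max(total_steps, 1), 1.0)
    explore_reward = 1.0 if is_well_formed else 0.0   # encourages trying tools early
    exploit_reward = 1.0 if is_correct_call else 0.0  # demands correct usage later
    # Linear schedule: early training weights exploration, late training exploitation.
    return (1.0 - progress) * explore_reward + progress * exploit_reward

# Example: a well-formed but incorrect call is rewarded early, penalized late.
print(dynamic_tool_reward(step=100, total_steps=10_000,
                          is_well_formed=True, is_correct_call=False))
print(dynamic_tool_reward(step=9_500, total_steps=10_000,
                          is_well_formed=True, is_correct_call=False))
```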