GTA: Supervised-Guided Reinforcement Learning for Text Classification with Large Language Models
Min Zeng, Jingfei Sun, Xueyou Luo, Shiqi Zhang, Li Xie, Caiquan Liu, Xiaoxin Chen
Findings of the Association for Computational Linguistics: EMNLP 2025
In natural language processing (NLP) tasks, pure reinforcement learning (RL) fine-tuning methods often suffer from inefficient exploration and slow convergence, while supervised fine-tuning (SFT) methods, although efficient to train, have a limited performance ceiling and a less solid theoretical foundation compared to reinforcement learning. To address this efficiency-capability trade-off, we propose the Guess-Think-Answer (GTA) framework, which combines the efficiency of SFT with the capability gains of RL in a unified training paradigm. GTA works by having the model first produce a provisional guess (optimized via cross-entropy loss), then reflect on this guess before generating the final answer, with RL rewards shaping both the final output and the format of the entire GTA structure. This hybrid approach achieves both faster convergence than pure RL and a higher performance ceiling than pure SFT. To mitigate gradient conflicts between the two training signals, we employ loss masking and gradient constraints. Empirical results on three text classification benchmarks demonstrate that GTA substantially accelerates convergence while outperforming both standalone SFT and RL baselines.
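To make the hybrid objective concrete, below is a minimal, hypothetical sketch of how a GTA-style loss might be assembled, based only on the abstract: cross-entropy restricted to the guess span via loss masking, a simple REINFORCE-style surrogate on the answer span scaled by a scalar reward (standing in for the paper's correctness and format rewards), and gradient clipping as a crude placeholder for the paper's gradient constraints. The function name, mask layout, and weighting are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GTA-style hybrid loss (not the authors' code).
import torch
import torch.nn.functional as F

def gta_hybrid_loss(logits, target_ids, guess_mask, answer_mask, reward,
                    rl_weight=1.0):
    """Combine SFT on the guess span with an RL term on the answer span.

    logits:      (B, T, V) next-token logits from the policy model
    target_ids:  (B, T)    reference / sampled token ids
    guess_mask:  (B, T)    1 where tokens belong to the provisional guess
    answer_mask: (B, T)    1 where tokens belong to the final answer
    reward:      (B,)      scalar reward per sequence (answer + format)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)

    # Supervised signal: cross-entropy restricted to the guess tokens (loss masking).
    sft_loss = -(token_logp * guess_mask).sum() / guess_mask.sum().clamp(min=1)

    # RL signal: REINFORCE-style surrogate on the answer tokens, scaled by reward.
    seq_logp = (token_logp * answer_mask).sum(dim=-1)  # (B,)
    rl_loss = -(reward * seq_logp).mean()

    return sft_loss + rl_weight * rl_loss

# Usage sketch: backprop the hybrid loss, then clip gradients as a stand-in
# for the gradient constraints described in the paper.
if __name__ == "__main__":
    B, T, V = 2, 8, 50
    logits = torch.randn(B, T, V, requires_grad=True)
    targets = torch.randint(0, V, (B, T))
    guess_mask = torch.zeros(B, T); guess_mask[:, :3] = 1.0
    answer_mask = torch.zeros(B, T); answer_mask[:, 5:] = 1.0
    reward = torch.tensor([1.0, -0.5])
    loss = gta_hybrid_loss(logits, targets, guess_mask, answer_mask, reward)
    loss.backward()
    torch.nn.utils.clip_grad_norm_([logits], max_norm=1.0)
```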