Training Language Models to Critique With Multi-agent Feedback
Tian Lan | Wenwei Zhang | Chengqi Lyu | Shuaibin Li | Chen Xu | Heyan Huang | Dahua Lin | Xian-Ling Mao | Kai Chen
Findings of the Association for Computational Linguistics: EMNLP 2025
Critique ability, a meta-cognitive capability of humans, remains challenging for LLMs to improve. While utilizing human annotation can enhance critique ability effectively, most recent works primarily rely on supervised fine-tuning (SFT) using critiques generated by a single LLM like GPT-4, which is more scalable and cost-effective. However, such model-generated critiques often suffer from inherent flaws due to the complexity of critique. Consequently, fine-tuning LLMs on these flawed critiques not only limits performance but also propagates errors into the learned model. To address this issue, we propose MultiCritique, a unified framework that leverages multi-agent feedback to improve critique ability in both the SFT and reinforcement learning (RL) stages. In the SFT stage, MultiCritique aggregates high-quality multi-agent critiques through a fine-grained meta-critique mechanism. In the RL stage, preference critiques are constructed and refined by validating their contributions to revisions, thereby enhancing the robustness of RL in improving critique ability. Based on MultiCritique, we construct SFT and RL datasets. Extensive experimental results on two benchmarks highlight the key benefits of our datasets, including superior quality, enhanced data efficiency, strong generalization on unseen tasks, and improvements in the general capability of LLMs. Notably, our fine-tuned 7B model significantly surpasses advanced 7B-13B models, approaching advanced 70B LLMs and GPT-4. Resources are publicly available.