DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling

Shanghaoran Quan


Abstract
The performance of the reward model (RM) is a critical factor in improving the effectiveness of a large language model (LLM) during alignment fine-tuning. Two challenges remain in RM training: 1) training the same RM on data from various task categories may cause its generalization performance to suffer from multi-task disturbance, and 2) the human annotation consistency rate is generally only 60% to 75%, so the training data contain a considerable amount of noise. To tackle these two challenges, we introduce the idea of Mixture-of-Experts (MoE) into the field of RM for the first time and propose the Double-Layer MoE RM (DMoERM). The outer layer is a sparse MoE: after classifying an input into a task category, we route it to the corresponding inner-layer task-specific model. The inner layer is a dense MoE: we decompose the specific task into multiple capability dimensions, individually fine-tune a LoRA expert on each one, and synthesize their outputs with an MLP to compute the final reward. To minimize costs, we call a public LLM API to obtain the capability preference labels. Validation on manually labeled datasets confirms that our model attains superior consistency with human preferences and outperforms advanced generative approaches. Meanwhile, through BoN sampling and RL experiments, we demonstrate that our model surpasses state-of-the-art ensemble RM methods and mitigates the overoptimization problem. Our code is available at: https://github.com/quanshr/DMoERM.
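The abstract describes a two-level architecture: a sparse outer router over task categories and, per category, a dense inner layer of per-capability experts whose scores are combined by an MLP. The snippet below is a minimal PyTorch sketch of that structure only, not the paper's implementation: it assumes a pooled hidden representation from an LLM backbone, stands in plain linear scoring heads for the LoRA-fine-tuned capability experts, and uses illustrative layer sizes and category/dimension counts.

```python
import torch
import torch.nn as nn

class InnerMoERM(nn.Module):
    """Dense inner-layer MoE for one task category: one scoring head per
    capability dimension (stand-ins for the paper's LoRA experts), with an
    MLP that synthesizes the per-capability scores into a single reward."""
    def __init__(self, hidden_size: int, num_capability_dims: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_size, 1) for _ in range(num_capability_dims)]
        )
        self.aggregator = nn.Sequential(
            nn.Linear(num_capability_dims, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, pooled_hidden: torch.Tensor) -> torch.Tensor:
        # pooled_hidden: (batch, hidden_size) representation of a (prompt, response) pair
        capability_scores = torch.cat(
            [expert(pooled_hidden) for expert in self.experts], dim=-1
        )  # (batch, num_capability_dims)
        return self.aggregator(capability_scores).squeeze(-1)  # (batch,)

class DMoERMSketch(nn.Module):
    """Sparse outer layer: a task classifier routes each input to exactly one
    inner-layer task-specific reward model."""
    def __init__(self, hidden_size: int, num_tasks: int, num_capability_dims: int):
        super().__init__()
        self.task_router = nn.Linear(hidden_size, num_tasks)
        self.task_rms = nn.ModuleList(
            [InnerMoERM(hidden_size, num_capability_dims) for _ in range(num_tasks)]
        )

    def forward(self, pooled_hidden: torch.Tensor) -> torch.Tensor:
        task_ids = self.task_router(pooled_hidden).argmax(dim=-1)  # hard (sparse) routing
        rewards = torch.empty(pooled_hidden.size(0), device=pooled_hidden.device)
        for task in task_ids.unique().tolist():
            mask = task_ids == task
            rewards[mask] = self.task_rms[task](pooled_hidden[mask])
        return rewards

# Toy usage with random features standing in for an LLM backbone's pooled hidden states.
model = DMoERMSketch(hidden_size=4096, num_tasks=5, num_capability_dims=4)
print(model(torch.randn(8, 4096)).shape)  # torch.Size([8])
```

In the paper, each capability expert is itself a LoRA-fine-tuned model trained on preference labels obtained via a public LLM API; the sketch preserves only the routing step and the MLP aggregation over per-capability scores.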
Anthology ID:
2024.findings-acl.418
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7006–7028
URL:
https://aclanthology.org/2024.findings-acl.418
Cite (ACL):
Shanghaoran Quan. 2024. DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling. In Findings of the Association for Computational Linguistics ACL 2024, pages 7006–7028, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling (Quan, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.418.pdf