Rethinking the Role of Proxy Rewards in Language Model Alignment

Sungdong Kim, Minjoon Seo


Abstract
Learning from human feedback via proxy reward modeling has been studied to align Large Language Models (LLMs) with human values. However, achieving reliable training through such a proxy reward model (RM) is not trivial, and its behavior has remained a black box. In this paper, we study the role of proxy rewards in LLM alignment via ‘reverse reward engineering’, composing interpretable features into a white-box reward function. We aim to replicate the ground-truth (gold) reward signal by achieving a monotonic relationship between the proxy and gold reward signals after training the model with the proxy reward in reinforcement learning (RL). Our findings indicate that successfully emulating the gold reward requires generating relevant responses of sufficient length to open-ended questions, while also ensuring response consistency for closed-ended questions. Furthermore, the resulting models that optimize our devised white-box reward show performance competitive with strong open-source RMs on alignment benchmarks. We highlight its potential use as a simple but strong reward baseline for LLM alignment, requiring neither an explicit human feedback dataset nor RM training.
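The abstract describes composing interpretable features into a white-box reward function. The following is a minimal illustrative sketch of that idea, not the paper's actual reward: the specific features (length, lexical relevance, a consistency check) and their combination here are hypothetical, chosen only to show what a hand-crafted, inspectable proxy reward could look like.

```python
def white_box_reward(question: str, response: str, is_open_ended: bool = True) -> float:
    """Score a response with hand-crafted, interpretable features (toy example)."""
    tokens = response.split()
    # Length feature (open-ended only): reward longer answers, capped at 1.0.
    length_score = min(len(tokens) / 100.0, 1.0) if is_open_ended else 0.0
    # Relevance feature: crude lexical overlap between question and response.
    q_words = set(question.lower().split())
    r_words = {w.lower() for w in tokens}
    relevance = len(q_words & r_words) / max(len(q_words), 1)
    # Consistency feature (closed-ended only): penalize hedging markers.
    hedges = {"maybe", "perhaps", "unsure"}
    consistency = 0.0 if is_open_ended else (0.0 if hedges & r_words else 1.0)
    return length_score + relevance + consistency
```

Because every feature is an explicit, human-readable function of the text, the reward's behavior can be inspected directly, in contrast to a learned RM whose scoring criteria are opaque.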
Anthology ID:
2024.emnlp-main.1150
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20656–20674
URL:
https://aclanthology.org/2024.emnlp-main.1150
Cite (ACL):
Sungdong Kim and Minjoon Seo. 2024. Rethinking the Role of Proxy Rewards in Language Model Alignment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20656–20674, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Rethinking the Role of Proxy Rewards in Language Model Alignment (Kim & Seo, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1150.pdf