ALaRM: Align Language Models via Hierarchical Rewards Modeling

Yuhang Lai, Siyuan Wang, Shujun Liu, Xuanjing Huang, Zhongyu Wei


Abstract
We introduce ALaRM, the first framework modeling hierarchical rewards in reinforcement learning from human feedback (RLHF), which is designed to enhance the alignment of large language models (LLMs) with human preferences. The framework addresses the limitations of current alignment approaches, which often struggle with the inconsistency and sparsity of human supervision signals, by integrating holistic rewards with aspect-specific rewards. This integration enables more precise and consistent guidance of language models towards desired outcomes, particularly in complex and open text generation tasks. By employing a methodology that filters and combines multiple rewards based on their consistency, the framework provides a reliable mechanism for improving model alignment. We validate our approach through applications in long-form question answering and machine translation tasks, employing gpt-3.5-turbo for pairwise comparisons, and demonstrate improvements over existing baselines. Our work underscores the effectiveness of hierarchical rewards modeling in refining LLM training processes for better human preference alignment. We release our code at https://ALaRM-fdu.github.io.
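The abstract describes combining a holistic reward with aspect-specific rewards, filtered for consistency. As a rough illustration only, the following sketch shows one hypothetical way such a combination could work; the function name, the sign-agreement consistency check, and the additive combination rule are all illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: combine a holistic reward with aspect-specific
# rewards, keeping only aspect rewards consistent with the holistic
# signal's direction. This is an assumed, simplified stand-in for the
# hierarchical scheme the abstract describes.

def hierarchical_reward(holistic, aspects, threshold=0.0):
    """Return the holistic reward plus aspect rewards that agree
    in sign with it (a toy consistency filter)."""
    consistent = [a for a in aspects if a * holistic >= threshold]
    # Consistent aspect rewards act as a bonus on the holistic reward.
    return holistic + sum(consistent)

# Example: the aspect reward disagreeing in sign (-0.3) is filtered out.
r = hierarchical_reward(1.0, [0.5, -0.3, 0.2])  # -> 1.7
```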
Anthology ID:
2024.findings-acl.465
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7817–7831
URL:
https://aclanthology.org/2024.findings-acl.465
Cite (ACL):
Yuhang Lai, Siyuan Wang, Shujun Liu, Xuanjing Huang, and Zhongyu Wei. 2024. ALaRM: Align Language Models via Hierarchical Rewards Modeling. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7817–7831, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
ALaRM: Align Language Models via Hierarchical Rewards Modeling (Lai et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.465.pdf