Preference-Guided Reflective Sampling for Aligning Language Models

Hai Ye, Hwee Tou Ng


Abstract
Iterative data generation and model re-training can effectively align large language models (LLMs) to human preferences. The process of data sampling is crucial, as it significantly influences the success of policy improvement. Repeated random sampling is a widely used method that independently queries the model multiple times to generate outputs. In this work, we propose a more effective sampling method, named Preference-Guided Reflective Sampling (PRS). Unlike random sampling, PRS employs a tree-based generation framework to enable more efficient sampling. It leverages adaptive self-refinement techniques to better explore the sampling space. By specifying user preferences in natural language, PRS can further optimize response generation according to these preferences. As a result, PRS can align models to diverse user preferences. Our experiments demonstrate that PRS generates higher-quality responses with significantly higher rewards. On AlpacaEval and Arena-Hard, PRS substantially outperforms repeated random sampling in best-of-N sampling. Moreover, PRS shows strong performance when applied in iterative offline RL training.
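The abstract sketches the PRS procedure at a high level: sample responses conditioned on a stated preference, score them with a reward model, and expand the best candidates with self-refinement in a tree-structured search. As a rough illustration only, the following minimal Python sketch mirrors that loop; the `generate` and `reward` callables, the prompt templates, and the `width`/`depth` parameters are hypothetical stand-ins and not the paper's actual implementation.

```python
from typing import Callable, List, Tuple

def prs_sample(
    prompt: str,
    preference: str,
    generate: Callable[[str], str],       # assumed LLM interface: prompt -> response
    reward: Callable[[str, str], float],  # assumed reward model: (prompt, response) -> score
    width: int = 2,                       # responses sampled per tree node (assumption)
    depth: int = 3,                       # number of refinement rounds (assumption)
) -> Tuple[str, float]:
    """Sketch of tree-based sampling with preference-guided self-refinement."""
    # Root: sample initial responses conditioned on the stated preference.
    root_prompt = f"{prompt}\n\nPlease answer following this preference: {preference}"
    initial = [generate(root_prompt) for _ in range(width)]
    scored: List[Tuple[str, float]] = [(r, reward(prompt, r)) for r in initial]
    best_resp, best_score = max(scored, key=lambda x: x[1])

    for _ in range(depth):
        # Self-refinement step: ask the model to reflect on the current best
        # response with respect to the preference and produce improved ones.
        refine_prompt = (
            f"{prompt}\n\nPreference: {preference}\n"
            f"Previous response:\n{best_resp}\n"
            "Reflect on how well the response satisfies the preference, "
            "then write an improved response."
        )
        children = [generate(refine_prompt) for _ in range(width)]
        child_scored = [(r, reward(prompt, r)) for r in children]
        # Keep the highest-reward response seen so far (best-of-N selection).
        cand_resp, cand_score = max(child_scored, key=lambda x: x[1])
        if cand_score > best_score:
            best_resp, best_score = cand_resp, cand_score

    return best_resp, best_score
```

In this sketch the same loop can serve both uses mentioned in the abstract: returning the best response (best-of-N inference) or collecting all scored responses as training data for iterative offline RL.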
Anthology ID: 2024.emnlp-main.1206
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 21646–21668
URL: https://aclanthology.org/2024.emnlp-main.1206
Cite (ACL): Hai Ye and Hwee Tou Ng. 2024. Preference-Guided Reflective Sampling for Aligning Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21646–21668, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Preference-Guided Reflective Sampling for Aligning Language Models (Ye & Ng, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.1206.pdf