Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets

Peter Devine


Abstract
Training Large Language Models (LLMs) with Reinforcement Learning from AI Feedback (RLAIF) aligns model outputs more closely with human preferences. This involves an evaluator model ranking multiple candidate responses to user prompts. However, the rankings from popular evaluator models such as GPT-4 can be inconsistent. We propose the Repeat Ranking method, in which we evaluate the same responses multiple times and train only on those responses that are consistently ranked. Using 2,714 training prompts in 62 languages, we generated responses from 7 top multilingual LLMs and had GPT-4 rank them five times each. Evaluating on MT-Bench chat benchmarks in six languages, our method outperformed the standard practice of training on all available prompts. Our work highlights the quality versus quantity trade-off in RLAIF dataset generation and offers a stackable strategy for enhancing dataset quality and thus model quality.
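The filtering step described in the abstract, ranking the same responses several times and keeping only prompts whose rankings agree, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `rank_responses` callable stands in for the GPT-4 ranking call, and the exact-agreement criterion is an assumed simplification of the paper's consistency check.

```python
from typing import Callable, List, Optional

def repeat_rank(
    prompt: str,
    responses: List[str],
    rank_responses: Callable[[str, List[str]], List[int]],  # hypothetical evaluator call (e.g. GPT-4)
    n_repeats: int = 5,
) -> Optional[List[int]]:
    """Return a ranking only if all repeated rankings agree; otherwise None."""
    rankings = [rank_responses(prompt, responses) for _ in range(n_repeats)]
    first = rankings[0]
    if all(r == first for r in rankings):
        return first  # consistently ranked: keep this prompt for preference training
    return None       # inconsistent rankings: drop this prompt from the dataset

# Usage sketch: build the filtered preference dataset from (prompt, responses) pairs.
# ranked = [(p, rs, repeat_rank(p, rs, gpt4_rank)) for p, rs in raw_pairs]
# dataset = [(p, rs, r) for p, rs, r in ranked if r is not None]
```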
Anthology ID:
2024.mrl-1.5
Volume:
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Jonne Sälevä, Abraham Owodunni
Venue:
MRL
Publisher:
Association for Computational Linguistics
Pages:
93–105
URL:
https://aclanthology.org/2024.mrl-1.5
Cite (ACL):
Peter Devine. 2024. Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets. In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), pages 93–105, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets (Devine, MRL 2024)
PDF:
https://aclanthology.org/2024.mrl-1.5.pdf