Safer-Instruct: Aligning Language Models with Automated Preference Data

Taiwei Shi, Kai Chen, Jieyu Zhao


Abstract
Reinforcement learning from human feedback (RLHF) is a vital strategy for enhancing the capabilities of language models. However, annotating preference data for RLHF is a resource-intensive and creativity-demanding process, and existing automatic generation methods face limitations in data diversity and quality. In response, we present Safer-Instruct, a novel pipeline for automatically constructing large-scale preference data. Our approach leverages reversed instruction tuning, instruction induction, and expert model evaluation to efficiently generate high-quality preference data without human annotators. To verify the effectiveness of Safer-Instruct, we apply the pipeline to construct a safety preference dataset as a case study. Fine-tuning an Alpaca model on this synthetic dataset not only improves harmlessness but also outperforms models fine-tuned on human-annotated safety preference data, all while maintaining a competitive edge on downstream tasks. Importantly, the Safer-Instruct framework is versatile and can be applied to generate preference data across various domains, extending its utility beyond safety preferences. It addresses the challenges of preference data acquisition and advances the development of more capable and responsible AI systems. For the dataset and code implementation, see https://github.com/uscnlp-lime/safer-instruct/.
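As a rough illustration only (not the authors' released code), one plausible reading of the pipeline described above can be sketched in Python as below. The helpers induce_instruction, expert_generate, and expert_evaluate are hypothetical stand-ins for the paper's reversed instruction-tuning model, expert response generator, and expert evaluator; treating the seed text as the dispreferred response is likewise an assumption made for illustration.

# Hypothetical sketch of a Safer-Instruct-style data-construction loop.
# All function names below are placeholders, not the paper's API.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    instruction: str  # induced from the raw seed text
    chosen: str       # expert model's preferred response
    rejected: str     # seed text kept as dispreferred (illustrative assumption)

def induce_instruction(text: str) -> str:
    """Placeholder: a reversed instruction-tuned model that, given a
    response, generates an instruction that could have produced it."""
    raise NotImplementedError

def expert_generate(instruction: str) -> str:
    """Placeholder: an expert model that writes the preferred response."""
    raise NotImplementedError

def expert_evaluate(instruction: str, response: str) -> bool:
    """Placeholder: expert-model filtering; keep the pair only if the
    preferred response is judged high quality and on-instruction."""
    raise NotImplementedError

def build_preference_data(raw_texts: list[str]) -> list[PreferencePair]:
    pairs = []
    for text in raw_texts:
        instruction = induce_instruction(text)       # instruction induction
        preferred = expert_generate(instruction)     # expert response
        if expert_evaluate(instruction, preferred):  # expert evaluation
            pairs.append(PreferencePair(instruction, preferred, text))
    return pairs

The resulting (instruction, chosen, rejected) triples are the standard input format for preference-based fine-tuning; the filtering step matters because unvetted generations would reintroduce the quality problems the pipeline is meant to avoid.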
Anthology ID:
2024.naacl-long.422
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
7629–7644
URL:
https://aclanthology.org/2024.naacl-long.422
Cite (ACL):
Taiwei Shi, Kai Chen, and Jieyu Zhao. 2024. Safer-Instruct: Aligning Language Models with Automated Preference Data. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7629–7644, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Safer-Instruct: Aligning Language Models with Automated Preference Data (Shi et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.422.pdf
Copyright:
2024.naacl-long.422.copyright.pdf