Unsupervised Human Preference Learning

Sumuk Shashidhar, Abhinav Chinta, Vaibhav Sahai, Dilek Tur


Abstract
Large language models demonstrate impressive reasoning abilities but struggle to provide personalized content due to their lack of individual user preference information. Existing methods, such as in-context learning and parameter-efficient fine-tuning, fall short in capturing the complexity of human preferences, especially given the small, personal datasets individuals possess. In this paper, we propose a novel approach utilizing small parameter models as preference agents to generate natural language rules that guide a larger, pre-trained model, enabling efficient personalization. Our method involves a small, local “steering wheel” model that directs the outputs of a much larger foundation model, producing content tailored to an individual’s preferences while leveraging the extensive knowledge and capabilities of the large model. Importantly, this personalization is achieved without the need to fine-tune the large model. Experimental results on email and article datasets demonstrate that our technique significantly outperforms baseline personalization methods. By allowing foundation models to adapt to individual preferences in a data- and compute-efficient manner, our approach paves the way for highly personalized language model applications.
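
The sketch below illustrates the pipeline the abstract describes: a small local model distills a user's examples into natural-language preference rules, and those rules are injected into the prompt of a frozen foundation model. It is a minimal, hypothetical illustration only; the function names and toy stand-ins are assumptions, not the authors' released code (see the Software attachment below for that).

```python
"""Minimal sketch of the preference-agent idea: a small 'steering wheel'
model writes natural-language rules from a user's own examples, and a
frozen large model is steered by prepending those rules to its prompt.
The callables below are hypothetical stand-ins for any text generators
(e.g. a local small model and a hosted foundation-model API)."""
from typing import Callable, List

Generator = Callable[[str], str]  # prompt -> generated text


def derive_preference_rules(small_model: Generator, user_examples: List[str]) -> str:
    """Ask the small local model to summarize the user's style as explicit
    natural-language rules (no gradient updates anywhere)."""
    prompt = (
        "Read the user's past writing and list concise rules describing "
        "their tone, structure, and vocabulary preferences.\n\n"
        + "\n---\n".join(user_examples)
    )
    return small_model(prompt)


def personalized_generate(large_model: Generator, rules: str, task: str) -> str:
    """Steer the frozen foundation model by injecting the learned rules
    into its prompt, rather than fine-tuning its weights."""
    prompt = f"Follow these user preference rules:\n{rules}\n\nTask: {task}"
    return large_model(prompt)


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end without any model weights.
    small_model = lambda p: "- Keep emails under five sentences.\n- Use a friendly sign-off."
    large_model = lambda p: f"[draft conditioned on]\n{p}"

    rules = derive_preference_rules(small_model, ["Hi team, quick update ... Cheers, S."])
    print(personalized_generate(large_model, rules, "Write an email announcing the launch."))
```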
Anthology ID:
2024.emnlp-main.200
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3412–3445
URL:
https://aclanthology.org/2024.emnlp-main.200
Cite (ACL):
Sumuk Shashidhar, Abhinav Chinta, Vaibhav Sahai, and Dilek Tur. 2024. Unsupervised Human Preference Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3412–3445, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Human Preference Learning (Shashidhar et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.200.pdf
Software:
2024.emnlp-main.200.software.zip
Data:
2024.emnlp-main.200.data.zip