Learning Personalized Alignment for Evaluating Open-ended Text Generation
Danqing Wang | Kevin Yang | Hanlin Zhu | Xiaomeng Yang | Andrew Cohen | Lei Li | Yuandong Tian
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recent research has increasingly focused on evaluating large language models' (LLMs) alignment with diverse human values and preferences, particularly for open-ended tasks like story generation. Traditional evaluation metrics rely heavily on lexical similarity with human-written references, often showing poor correlation with human judgments and failing to account for alignment with the diversity of human preferences. To address these challenges, we introduce PerSE, an interpretable evaluation framework designed to assess alignment with specific human preferences. It is tuned to infer specific preferences from an in-context personal profile and to evaluate the alignment between the generated content and those personal preferences. PerSE enhances interpretability by providing detailed comments and fine-grained scoring, facilitating more personalized content generation. Our 13B LLaMA-2-based PerSE shows a 15.8% increase in Kendall correlation and a 13.7% rise in accuracy with zero-shot reviewers compared to GPT-4. It also outperforms GPT-4 by 46.01% in Kendall correlation on new domains, indicating its transferability.