Minjae Kang
2025
Personalized LLM Decoding via Contrasting Personal Preference
Hyungjune Bu | ChanJoo Jung | Minjae Kang | Jaehyung Kim
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
As large language models (LLMs) are progressively deployed in various real-world applications, personalization of LLMs has become increasingly important. While various approaches to LLM personalization, such as prompt-based and training-based methods, have been actively explored, the development of effective decoding-time algorithms remains largely overlooked despite their demonstrated potential. In this paper, we propose Contrasting Personal Preference (CoPe), a novel decoding-time approach applied after parameter-efficient fine-tuning (PEFT) on user-specific data. Our core idea is to leverage reward-guided decoding specifically for personalization by maximizing each user’s implicit reward signal. We evaluate CoPe across five open-ended personalized text generation tasks. Our empirical results demonstrate that CoPe achieves strong performance, improving personalization by an average of 10.57% in ROUGE-L without relying on external reward models or additional training procedures.
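As a rough, hedged illustration of the reward-guided decoding idea described in this abstract (the abstract does not give CoPe's exact formula, so the combination rule, the `alpha` weight, and every name below are assumptions, not the paper's method), one minimal way to favor an implicit reward at decoding time is to boost tokens whose logits rise after user-specific PEFT relative to the base model:

```python
import torch
import torch.nn.functional as F

def cope_style_logits(personal_logits, base_logits, alpha=1.0):
    # Hypothetical contrastive combination (not necessarily CoPe's exact rule):
    # add the personal-vs-base logit gap, which plays the role of an implicit
    # reward signal, on top of the personalized model's next-token logits.
    return personal_logits + alpha * (personal_logits - base_logits)

# Toy usage over a 12-token vocabulary with random stand-in logits.
torch.manual_seed(0)
personal = torch.randn(12)   # stand-in for the PEFT-adapted model's next-token logits
base = torch.randn(12)       # stand-in for the base model's next-token logits

probs = F.softmax(cope_style_logits(personal, base, alpha=0.5), dim=-1)
next_token = torch.argmax(probs).item()
```

Under these assumptions, `alpha = 0` recovers ordinary decoding from the PEFT-adapted model, while larger values push generation further toward the user-specific preferences that the contrast exposes.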
Riemannian Optimization for LoRA on the Stiefel Manifold
JuneYoung Park | Minjae Kang | Seongbae Lee | Haegang Lee | Seongwan Kim | Jaeho Lee
Findings of the Association for Computational Linguistics: EMNLP 2025
While powerful, large language models (LLMs) present significant fine-tuning challenges due to their size. Parameter-efficient fine-tuning (PEFT) methods like LoRA provide solutions, yet suffer from critical optimizer inefficiencies, notably basis redundancy in LoRA’s B matrix when optimized with AdamW, which fundamentally limits performance. We address this by optimizing the B matrix on the Stiefel manifold, imposing explicit orthogonality constraints that achieve near-perfect orthogonality and full effective rank. This geometric approach dramatically enhances parameter efficiency and representational capacity. Our Stiefel optimizer consistently outperforms AdamW across benchmarks with both LoRA and DoRA, demonstrating that geometric constraints are the key to unlocking LoRA’s full potential for effective LLM fine-tuning.
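As a minimal sketch of what optimizing LoRA's B matrix on the Stiefel manifold can look like (the abstract does not describe the paper's actual optimizer; the tangent-space projection, QR-based retraction, toy least-squares objective, and all names below are illustrative assumptions), a basic Riemannian gradient step projects the Euclidean gradient onto the tangent space at B and retracts the update back onto the manifold so that BᵀB stays close to the identity:

```python
import torch

def stiefel_project(X, G):
    # Project the Euclidean gradient G onto the tangent space of the
    # Stiefel manifold at X (where X^T X = I): G - X * sym(X^T G).
    XtG = X.T @ G
    return G - X @ (0.5 * (XtG + XtG.T))

def qr_retract(Y):
    # QR-based retraction: map an ambient-space point back onto the
    # manifold; sign-fix R's diagonal so the orthonormal factor is unique.
    Q, R = torch.linalg.qr(Y)
    signs = (torch.diagonal(R) >= 0).float() * 2 - 1
    return Q * signs

# Toy example: keep a LoRA-style B factor orthonormal while fitting B @ A to a target.
torch.manual_seed(0)
d, r = 64, 8
B = qr_retract(torch.randn(d, r))      # start on the Stiefel manifold St(d, r)
A = torch.randn(r, 16)
target = torch.randn(d, 16)
lr = 0.1

for _ in range(200):
    B.requires_grad_(True)
    loss = ((B @ A - target) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        riem_grad = stiefel_project(B, B.grad)   # Riemannian gradient
        B = qr_retract(B - lr * riem_grad)       # retract the step back onto the manifold

print(torch.allclose(B.T @ B, torch.eye(r), atol=1e-5))  # columns stay orthonormal
```

In this sketch the retraction is what enforces the explicit orthogonality constraint: after every step the columns of B remain orthonormal, which also guarantees the full column rank that the abstract highlights.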