Xinyang Yi
2024
Aligning Large Language Models with Recommendation Knowledge
Yuwei Cao | Nikhil Mehta | Xinyang Yi | Raghunandan Hulikal Keshavan | Lukasz Heldt | Lichan Hong | Ed Chi | Maheswaran Sathiamoorthy
Findings of the Association for Computational Linguistics: NAACL 2024
Large language models (LLMs) have recently been used as backbones for recommender systems. However, their performance often lags behind conventional methods in standard tasks like retrieval. We attribute this to a mismatch between LLMs’ knowledge and the knowledge crucial for effective recommendations. While LLMs excel at natural language reasoning, they cannot model complex user-item interactions inherent in recommendation tasks. We propose bridging the knowledge gap and equipping LLMs with recommendation-specific knowledge to address this. Operations such as Masked Item Modeling (MIM) and Bayesian Personalized Ranking (BPR) have found success in conventional recommender systems. Inspired by this, we simulate these operations through natural language to generate auxiliary-task data samples that encode item correlations and user preferences. Fine-tuning LLMs on such auxiliary-task data samples and incorporating more informative recommendation-task data samples facilitates the injection of recommendation-specific knowledge into LLMs. Extensive experiments across retrieval, ranking, and rating prediction tasks on LLMs such as FLAN-T5-Base and FLAN-T5-XL show the effectiveness of our technique in domains such as Amazon Toys & Games, Beauty, and Sports & Outdoors. Notably, our method outperforms conventional and LLM-based baselines, including the current SOTA, by significant margins in retrieval, showcasing its potential for enhancing recommendation quality.
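The abstract's core idea — simulating operations like Masked Item Modeling (MIM) and Bayesian Personalized Ranking (BPR) as natural-language auxiliary tasks — can be illustrated with a minimal sketch. The prompt templates, function names, and sample fields below are assumptions for illustration; the paper's actual data-generation format may differ.

```python
import random


def mim_example(item_titles, mask_token="[MASK]", seed=0):
    """Build an MIM-style auxiliary sample: mask one item in a user's
    purchase sequence and ask the model to recover it.

    Hypothetical prompt format, not the paper's exact template.
    """
    rng = random.Random(seed)
    idx = rng.randrange(len(item_titles))
    masked = list(item_titles)
    target = masked[idx]
    masked[idx] = mask_token
    prompt = (
        "A user bought the following items in order: "
        + ", ".join(masked)
        + f". Which item fits the {mask_token} position?"
    )
    return {"input": prompt, "target": target}


def bpr_example(history_titles, preferred, rejected):
    """Build a BPR-style auxiliary sample: given a purchase history,
    ask which of two candidate items the user would prefer.

    Again a hypothetical template for illustration only.
    """
    prompt = (
        "A user bought the following items in order: "
        + ", ".join(history_titles)
        + f". Would the user prefer '{preferred}' or '{rejected}' next?"
    )
    return {"input": prompt, "target": preferred}
```

Fine-tuning on many such (input, target) pairs is one way to encode the item correlations (MIM) and pairwise user preferences (BPR) that the abstract describes injecting into the LLM.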
Leveraging LLM Reasoning Enhances Personalized Recommender Systems
Alicia Tsai | Adam Kraft | Long Jin | Chenwei Cai | Anahita Hosseini | Taibai Xu | Zemin Zhang | Lichan Hong | Ed H. Chi | Xinyang Yi
Findings of the Association for Computational Linguistics: ACL 2024
Recent advancements have showcased the potential of Large Language Models (LLMs) in executing reasoning tasks, particularly facilitated by Chain-of-Thought (CoT) prompting. While tasks like arithmetic reasoning involve clear, definitive answers and logical chains of thought, the application of LLM reasoning in recommendation systems (RecSys) presents a distinct challenge. RecSys tasks revolve around subjectivity and personalized preferences, an under-explored domain in utilizing LLMs’ reasoning capabilities. Our study explores several aspects to better understand reasoning for RecSys and demonstrate how task quality improves by utilizing LLM reasoning for both zero-shot and fine-tuning settings. Additionally, we propose Rec-SAVER (Recommender Systems Automatic Verification and Evaluation of Reasoning) to automatically assess the quality of LLM reasoning responses without the requirement of curated gold references or human raters. We show that our framework aligns with real human judgment on the coherence and faithfulness of reasoning responses. Overall, our work shows that incorporating reasoning into RecSys can improve personalized tasks, paving the way for further advancements in recommender system methodologies.
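Applying Chain-of-Thought prompting to a subjective RecSys task such as rating prediction, as this abstract discusses, can be sketched as follows. The template and function name are hypothetical; the paper's prompts and the Rec-SAVER evaluation procedure are not reproduced here.

```python
def cot_rating_prompt(rated_history, candidate):
    """Build a zero-shot chain-of-thought prompt for rating prediction.

    rated_history: list of (item_title, rating) pairs from the user.
    candidate: the item whose rating we want the LLM to predict.

    Illustrative template only, not the paper's exact wording.
    """
    history_lines = [f"- {title}: rated {rating}/5" for title, rating in rated_history]
    return (
        "A user has rated these products:\n"
        + "\n".join(history_lines)
        + f"\n\nHow would this user rate '{candidate}' on a 1-5 scale? "
        "Let's think step by step about the user's preferences before answering."
    )
```

The trailing "Let's think step by step" instruction is the standard zero-shot CoT trigger; the model's free-form reasoning would then be assessed for coherence and faithfulness, which is the role Rec-SAVER plays in the paper.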