ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning

Millennium Bismay, Xiangjue Dong, James Caverlee


Abstract
This paper presents ReasoningRec, a reasoning-based recommendation framework that leverages Large Language Models (LLMs) to bridge the gap between recommendations and human-interpretable explanations. In contrast to conventional recommendation systems that rely on implicit user-item interactions, ReasoningRec employs LLMs to model users and items, focusing on preferences, aversions, and explanatory reasoning. The framework utilizes a larger LLM to generate synthetic explanations for user preferences, which are subsequently used to fine-tune a smaller LLM for enhanced recommendation accuracy and human-interpretable explanation generation. Our experimental study investigates the impact of reasoning and contextual information on personalized recommendations, revealing that the quality of contextual and personalized data significantly influences the LLM's capacity to generate plausible explanations. Empirical evaluations demonstrate that ReasoningRec surpasses state-of-the-art methods by up to 12.5% in recommendation prediction while concurrently providing human-intelligible explanations.
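
The abstract describes a two-stage pipeline only at a high level, so the following is a minimal, hypothetical Python sketch of that idea: a larger LLM produces a synthetic explanation of a user's preferences, and the explanation plus the recommendation label becomes a fine-tuning example for a smaller LLM. This is not the authors' code; the helper call_large_llm, the prompt wording, and the prompt/completion record schema are all illustrative assumptions.

```python
import json

def call_large_llm(prompt: str) -> str:
    """Stand-in for a call to a larger LLM (e.g., via an inference API).
    Returns a canned explanation so the sketch runs end to end."""
    return ("The user gravitates toward character-driven science fiction and "
            "avoids slow-paced dramas, so this fast-plotted title is a likely match.")

def build_finetune_record(user_history, candidate_item, label):
    """Build one supervised example: prompt = user context + candidate item,
    target = synthetic explanation + a yes/no recommendation."""
    liked = ", ".join(title for title, rating in user_history if rating >= 4)
    disliked = ", ".join(title for title, rating in user_history if rating <= 2)
    prompt = (
        f"User liked: {liked}\n"
        f"User disliked: {disliked}\n"
        f"Candidate item: {candidate_item}\n"
        "Explain the user's preferences and predict whether they will like the candidate."
    )
    explanation = call_large_llm(prompt)  # stage 1: larger LLM writes the reasoning
    target = f"{explanation}\nRecommendation: {'yes' if label else 'no'}"
    # Stage 2: records like this would form the fine-tuning set for a smaller LLM.
    return {"prompt": prompt, "completion": target}

if __name__ == "__main__":
    history = [("Interstellar", 5), ("The Notebook", 1), ("Dune", 4)]
    print(json.dumps(build_finetune_record(history, "Blade Runner 2049", True), indent=2))
```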
Anthology ID: 2025.findings-naacl.454
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8132–8148
URL: https://aclanthology.org/2025.findings-naacl.454/
Cite (ACL): Millennium Bismay, Xiangjue Dong, and James Caverlee. 2025. ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 8132–8148, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning (Bismay et al., Findings 2025)
PDF: https://aclanthology.org/2025.findings-naacl.454.pdf