Enhancing Reranking for Recommendation with LLMs through User Preference Retrieval

Haobo Zhang, Qiannan Zhu, Zhicheng Dou


Abstract
Recently, large language models (LLMs) have shown potential to enhance recommendation thanks to their broad knowledge and remarkable summarization ability. However, existing LLM-powered recommenders may produce redundant output, generating information from user behavior sequences that is irrelevant to the user’s preferences on candidate items. To address this issue, we propose UR4Rec, a framework that enhances reranking for recommendation with large language models through user preference retrieval. Specifically, UR4Rec develops a small transformer-based user preference retriever over candidate items that bridges LLMs and recommendation: it distills from user behavior sequences only the knowledge essential for the LLMs to rerank candidates. Experimental results on three real-world public datasets demonstrate the superiority of UR4Rec over existing baseline models.
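The abstract describes a small transformer-based retriever that selects candidate-relevant preference evidence from user behavior sequences before the LLM reranks. As a rough, hypothetical sketch of such a component (not the paper's actual UR4Rec implementation; the class name, dimensions, and dot-product scoring are all assumptions), one might score each past behavior against a candidate item and keep only the top-k behaviors to verbalize into the LLM's reranking prompt:

```python
# Hypothetical sketch: a small transformer encoder contextualizes a user's
# behavior sequence, scores each behavior against one candidate item, and
# returns only the most relevant behaviors as evidence for an LLM prompt.
# All names and dimensions are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class PreferenceRetriever(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, behavior_emb: torch.Tensor, candidate_emb: torch.Tensor, top_k: int = 5) -> torch.Tensor:
        # behavior_emb: (batch, seq_len, d_model); candidate_emb: (batch, d_model)
        h = self.encoder(behavior_emb)                          # contextualized behaviors
        scores = (h @ candidate_emb.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len) relevance
        return scores.topk(top_k, dim=-1).indices               # indices of essential evidence

# Usage: pick the 5 past interactions most relevant to a candidate, then
# verbalize only those into the LLM's reranking prompt (text step omitted).
retriever = PreferenceRetriever()
behaviors = torch.randn(1, 20, 128)  # 20 embedded past interactions
candidate = torch.randn(1, 128)      # one embedded candidate item
print(retriever(behaviors, candidate, top_k=5))
```

The design point this sketch illustrates: filtering behaviors per candidate keeps the LLM's input short and on-topic, which is exactly the redundancy problem the abstract targets.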
Anthology ID:
2025.coling-main.45
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
658–671
URL:
https://aclanthology.org/2025.coling-main.45/
Cite (ACL):
Haobo Zhang, Qiannan Zhu, and Zhicheng Dou. 2025. Enhancing Reranking for Recommendation with LLMs through User Preference Retrieval. In Proceedings of the 31st International Conference on Computational Linguistics, pages 658–671, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Enhancing Reranking for Recommendation with LLMs through User Preference Retrieval (Zhang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.45.pdf