Enhancing Large Language Model Based Sequential Recommender Systems with Pseudo Labels Reconstruction

Hyunsoo Na, Minseok Gang, Youngrok Ko, Jinseok Seol, Sang-goo Lee


Abstract
Large language models (LLMs) are utilized in a wide range of studies, and they have also demonstrated the potential to function independently as recommendation models. Nevertheless, training on interaction sequences and text labels modifies LLMs' pre-trained weights, diminishing their inherent strength in constructing and comprehending natural-language sentences. In this study, we propose a reconstruction-based LLM recommendation model (ReLRec) that harnesses the feature-extraction capability of LLMs while preserving their sentence-generation abilities. We reconstruct user and item pseudo-labels generated from user reviews while training on sequential data, aiming to exploit the key features of both users and items. Experimental results demonstrate the efficacy of label reconstruction in sequential recommendation tasks.
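The abstract describes a joint objective: a sequential next-item prediction loss trained alongside a reconstruction loss over review-derived pseudo-labels. The following is a minimal PyTorch sketch of that multi-task setup, not the authors' implementation; the backbone, head names, label format, and the 0.5 loss weight are all illustrative assumptions.

```python
# Sketch of joint training: next-item prediction + pseudo-label reconstruction.
# All module names, shapes, and the loss weighting below are assumptions for
# illustration; the paper only states that user/item pseudo-labels from reviews
# are reconstructed while training on sequential data.
import torch
import torch.nn as nn

class ReLRecSketch(nn.Module):
    def __init__(self, vocab_size=32000, hidden=512, num_items=10000):
        super().__init__()
        # Stand-in for the LLM backbone (the paper fine-tunes a real LLM).
        self.embed = nn.Embedding(vocab_size, hidden)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.item_head = nn.Linear(hidden, num_items)  # next-item prediction
        self.lm_head = nn.Linear(hidden, vocab_size)   # pseudo-label reconstruction

    def forward(self, tokens):
        h = self.backbone(self.embed(tokens))
        # Last position summarizes the sequence for recommendation;
        # all positions feed the token-level reconstruction head.
        return self.item_head(h[:, -1]), self.lm_head(h)

model = ReLRecSketch()
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, 32000, (4, 16))        # tokenized interaction sequence
next_item = torch.randint(0, 10000, (4,))        # ground-truth next item
label_tokens = torch.randint(0, 32000, (4, 16))  # pseudo-label text from reviews

item_logits, lm_logits = model(tokens)
# Joint objective: recommendation loss plus a weighted reconstruction loss
# (the 0.5 coefficient is a placeholder; the paper's weighting may differ).
loss = ce(item_logits, next_item) + 0.5 * ce(
    lm_logits.reshape(-1, lm_logits.size(-1)), label_tokens.reshape(-1)
)
loss.backward()
```

The intent of the reconstruction term, per the abstract, is to keep the LLM's sentence-generation ability intact while the sequential loss adapts it to recommendation.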
Anthology ID:
2024.findings-emnlp.423
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7213–7222
URL:
https://aclanthology.org/2024.findings-emnlp.423
Cite (ACL):
Hyunsoo Na, Minseok Gang, Youngrok Ko, Jinseok Seol, and Sang-goo Lee. 2024. Enhancing Large Language Model Based Sequential Recommender Systems with Pseudo Labels Reconstruction. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7213–7222, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Enhancing Large Language Model Based Sequential Recommender Systems with Pseudo Labels Reconstruction (Na et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.423.pdf