Learning Transition Patterns by Large Language Models for Sequential Recommendation
Jianyang Zhai | Zi-Feng Mai | Dongyi Zheng | Chang-Dong Wang | Xiawu Zheng | Hui Li | Feidiao Yang | Yonghong Tian
Proceedings of the 31st International Conference on Computational Linguistics, 2025
Large Language Models (LLMs) have demonstrated strong performance in sequential recommendation due to their robust language modeling and comprehension capabilities. In such paradigms, the item texts of an interaction sequence are formulated as sentences, and LLMs are used to learn language representations or to directly generate the target item texts by incorporating instructions. Despite their promise, these methods focus solely on modeling the mapping from sequential texts to target items, neglecting the relationships between the items within an interaction sequence. As a result, they fail to learn the transition patterns between items, which reflect dynamic changes in user preferences and are crucial for predicting the next item. To tackle this issue, we propose ST2SI, a novel framework that maps sequential item texts to sequential item IDs. Specifically, we first introduce multi-query input and item linear projection (ILP) to model the conditional probability distribution of items. We then propose ID alignment to address the misalignment between item texts and item IDs through instruction tuning. Finally, we propose efficient ILP tuning to adapt flexibly to different scenarios, requiring only the training of a linear layer to achieve competitive performance. Extensive experiments on six real-world datasets show that our approach outperforms the best baselines by 7.33% in NDCG@10, 4.65% in Recall@10, and 8.42% in MRR.
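To make the abstract's description more concrete, the following is a minimal sketch, not the authors' released code, of how an item linear projection (ILP) head over LLM hidden states might look: the hidden states at designated query positions are mapped to logits over the item-ID vocabulary, and only this linear layer would need training in the "efficient ILP tuning" setting. Names such as `hidden_size`, `num_items`, and `query_positions` are illustrative assumptions.

```python
# Hypothetical sketch of an item linear projection (ILP) head; the exact
# architecture in ST2SI may differ from this illustration.
import torch
import torch.nn as nn

class ItemLinearProjection(nn.Module):
    """Maps LLM hidden states at query positions to logits over item IDs."""
    def __init__(self, hidden_size: int, num_items: int):
        super().__init__()
        # In the efficient-tuning setting, only this layer is trained.
        self.proj = nn.Linear(hidden_size, num_items)

    def forward(self, hidden_states: torch.Tensor, query_positions: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from the LLM
        # query_positions: (batch, num_queries) indices of the multi-query tokens
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
        query_states = hidden_states[batch_idx, query_positions]  # (batch, num_queries, hidden_size)
        return self.proj(query_states)  # logits over the item-ID vocabulary

# Usage sketch: the projected logits are scored against ground-truth item IDs,
# e.g. with cross-entropy, so the model learns item-to-item transition patterns.
ilp = ItemLinearProjection(hidden_size=4096, num_items=50000)
dummy_hidden = torch.randn(2, 128, 4096)             # stand-in for LLM outputs
dummy_queries = torch.tensor([[125, 126, 127]] * 2)  # stand-in query positions
logits = ilp(dummy_hidden, dummy_queries)             # shape: (2, 3, 50000)
```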