Yonghong Tian


2025

Learning Transition Patterns by Large Language Models for Sequential Recommendation
Jianyang Zhai | Zi-Feng Mai | Dongyi Zheng | Chang-Dong Wang | Xiawu Zheng | Hui Li | Feidiao Yang | Yonghong Tian
Proceedings of the 31st International Conference on Computational Linguistics

Large Language Models (LLMs) have demonstrated strong performance in sequential recommendation due to their robust language modeling and comprehension capabilities. In such paradigms, the item texts of interaction sequences are formulated as sentences, and LLMs are used either to learn language representations or to directly generate target item texts guided by instructions. Despite their promise, these methods focus solely on modeling the mapping from sequential texts to target items, neglecting the relationships between the items within an interaction sequence. As a result, they fail to learn the transition patterns between items, which reflect dynamic changes in user preferences and are crucial for predicting the next item. To tackle this issue, we propose ST2SI, a novel framework that maps sequential item texts to sequential item IDs. Specifically, we first introduce multi-query input and item linear projection (ILP) to model the conditional probability distribution of items. We then propose ID alignment, which uses instruction tuning to address the misalignment between item texts and item IDs. Finally, we propose efficient ILP tuning, which adapts flexibly to different scenarios by training only a linear layer while achieving competitive performance. Extensive experiments on six real-world datasets show that our approach outperforms the best baselines by 7.33% in NDCG@10, 4.65% in Recall@10, and 8.42% in MRR.
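The efficient-tuning claim above (training only a linear layer over a frozen LLM) lends itself to a short sketch. Below is a minimal PyTorch illustration of that general idea, not the paper's actual implementation: the class name ILPHead, the dimensions, and the use of appended query positions are all assumptions made for the example.

```python
# Minimal sketch of a linear item-projection head: a frozen LLM encodes the
# sequential item texts, and only a linear layer mapping hidden states to
# item-ID logits is trained. Names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ILPHead(nn.Module):
    """Hypothetical linear projection from LLM hidden states to item-ID logits."""
    def __init__(self, hidden_dim: int, num_items: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_items)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, num_queries, hidden_dim) taken at appended
        # query positions; returns (batch, num_queries, num_items) logits.
        return self.proj(hidden_states)

# Efficient tuning: optimize the linear layer only; the backbone stays frozen.
head = ILPHead(hidden_dim=4096, num_items=50_000)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

hidden = torch.randn(8, 4, 4096)             # stand-in for frozen-LLM outputs
targets = torch.randint(0, 50_000, (8, 4))   # next-item IDs per query position
logits = head(hidden)
loss = F.cross_entropy(logits.reshape(-1, 50_000), targets.reshape(-1))
loss.backward()
optimizer.step()
```

Each query position predicts a distribution over the item-ID vocabulary, which mirrors the abstract's framing of mapping sequential texts to sequential item IDs.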

FedCSR: A Federated Framework for Multi-Platform Cross-Domain Sequential Recommendation with Dual Contrastive Learning
Dongyi Zheng | Hongyu Zhang | Jianyang Zhai | Lin Zhong | Lingzhi Wang | Jiyuan Feng | Xiangke Liao | Yonghong Tian | Nong Xiao | Qing Liao
Proceedings of the 31st International Conference on Computational Linguistics

Cross-domain sequential recommendation (CSR) has garnered significant attention. Current federated frameworks for CSR leverage information across multiple domains but often rely on user alignment, which increases communication costs and privacy risks. In this work, we propose FedCSR, a novel federated cross-domain sequential recommendation framework that eliminates the need for user alignment between platforms. FedCSR fully utilizes cross-domain knowledge to address the key challenges of data heterogeneity both across and within platforms. To tackle the heterogeneity of data patterns between platforms, we introduce Model Contrastive Learning (MCL), which reduces the gap between the local and global models. Additionally, we design Sequence Contrastive Learning (SCL), which addresses the heterogeneity of user preferences across different domains within a platform through tailored sequence augmentation techniques. Extensive experiments on multiple real-world datasets demonstrate that FedCSR outperforms existing baseline methods.
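Contrastive objectives of the kind MCL describes (reducing the gap between local and global models) are often written as an InfoNCE-style loss over model representations. The sketch below shows one plausible, MOON-style form of such a term; the function name, cosine similarity, and temperature are assumptions for illustration, not FedCSR's published objective.

```python
# Hedged sketch of a model-contrastive term: pull the local model's sequence
# representation toward the global model's and away from the previous round's
# local model. All specifics here are illustrative assumptions.
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local: torch.Tensor,
                           z_global: torch.Tensor,
                           z_prev: torch.Tensor,
                           tau: float = 0.5) -> torch.Tensor:
    """Each z_* is a (batch, dim) batch of sequence representations."""
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau  # positive pair
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau    # negative pair
    logits = torch.stack([pos, neg], dim=1)                     # (batch, 2)
    labels = torch.zeros(z_local.size(0), dtype=torch.long)     # index 0 = positive
    return F.cross_entropy(logits, labels)

# Example: representations from the local, global, and previous-round models.
z_l, z_g, z_p = (torch.randn(16, 128) for _ in range(3))
print(model_contrastive_loss(z_l, z_g, z_p))
```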