Intent Contrastive Learning Based on Multi-view Augmentation for Sequential Recommendation
Bo Pei | Yingzheng Zhu | Guangjin Wang | Huajuan Duan | Wenya Wu | Fuyong Xu | Yizhao Zhu | Peiyu Liu | Ran Lu
Proceedings of the 31st International Conference on Computational Linguistics (2025)
Sequential recommendation systems play a key role in modern information retrieval. However, existing intent-related work fails to adequately capture long-term dependencies in user behavior, i.e., the influence of early user behavior on current behavior, and also fails to effectively utilize item relevance. To overcome these limitations, we propose a novel sequential recommendation framework called ICMA. Specifically, we combine temporal variability with a position encoding that has extrapolation properties to encode sequences, thereby expanding the model’s view of user behavior and capturing long-term user dependencies more effectively. Additionally, we design a multi-view data augmentation method: building on random augmentation operations (e.g., crop, mask, and reorder), we further introduce insertion and substitution operations that exploit item relevance to augment sequence data from different views. Within this framework, clustering is performed to learn intent distributions, and the learned intents are integrated into the sequential recommendation model via contrastive SSL, which maximizes consistency between sequence views and their corresponding intents. Training alternates between the Expectation (E) step and the Maximization (M) step. Experiments on three real-world datasets show that our approach outperforms most baselines by 0.8% to 14.7%.
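The five augmentation views named in the abstract can be illustrated with a minimal sketch. The function names, ratios, the `MASK_TOKEN` id, and the toy `correlated` item pool below are assumptions for illustration; the paper's actual insertion and substitution operators select correlated items via learned item relevance, which a simple random choice here only stands in for.

```python
import random

MASK_TOKEN = 0  # hypothetical placeholder id for masked items

def crop(seq, ratio=0.6):
    """Keep a random contiguous sub-sequence covering `ratio` of the items."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.3):
    """Replace a random subset of items with the mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [MASK_TOKEN if i in idx else x for i, x in enumerate(seq)]

def reorder(seq, ratio=0.3):
    """Shuffle one random contiguous segment, leaving the rest in order."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    seg = seq[start:start + n]
    random.shuffle(seg)
    return seq[:start] + seg + seq[start + n:]

def insert(seq, correlated, ratio=0.2):
    """Insert items at random positions (stand-in for relevance-based insertion)."""
    out = list(seq)
    for _ in range(int(len(seq) * ratio)):
        out.insert(random.randint(0, len(out)), random.choice(correlated))
    return out

def substitute(seq, correlated, ratio=0.2):
    """Replace random items (stand-in for relevance-based substitution)."""
    out = list(seq)
    for i in random.sample(range(len(out)), int(len(out) * ratio)):
        out[i] = random.choice(correlated)
    return out
```

Applying two different operators to the same interaction sequence yields the two positive views that the contrastive objective then pulls toward the sequence's cluster-assigned intent.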