Nong Xiao


2025

FedCSR: A Federated Framework for Multi-Platform Cross-Domain Sequential Recommendation with Dual Contrastive Learning
Dongyi Zheng | Hongyu Zhang | Jianyang Zhai | Lin Zhong | Lingzhi Wang | Jiyuan Feng | Xiangke Liao | Yonghong Tian | Nong Xiao | Qing Liao
Proceedings of the 31st International Conference on Computational Linguistics

Cross-domain sequential recommendation (CSR) has garnered significant attention. Current federated frameworks for CSR leverage information across multiple domains but often rely on user alignment, which increases communication costs and privacy risks. In this work, we propose FedCSR, a novel federated cross-domain sequential recommendation framework that eliminates the need for user alignment between platforms. FedCSR fully utilizes cross-domain knowledge to address the key challenges of both inter- and intra-platform data heterogeneity. To tackle the heterogeneity of data patterns between platforms, we introduce Model Contrastive Learning (MCL) to reduce the gap between local and global models. Additionally, we design Sequence Contrastive Learning (SCL) to address the heterogeneity of user preferences across different domains within a platform by employing tailored sequence augmentation techniques. Extensive experiments conducted on multiple real-world datasets demonstrate that FedCSR achieves superior performance compared to existing baseline methods.
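
The two contrastive objectives named in the abstract can be illustrated with a minimal, hypothetical sketch: a model-contrastive loss that pulls the local model's representation toward the global model's and away from the previous local model's, and a sequence-contrastive (InfoNCE) loss over two augmented views of the same user sequence. The function names, tensor shapes, and temperature value below are assumptions for illustration, not the FedCSR implementation.

    # Hypothetical sketch of the two contrastive losses; not the authors' code.
    import torch
    import torch.nn.functional as F

    def mcl_loss(z_local, z_global, z_prev, tau=0.5):
        """Model contrastive loss: treat the global model's representation as the
        positive and the previous local model's representation as the negative."""
        pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
        neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
        logits = torch.stack([pos, neg], dim=1)                   # (batch, 2)
        labels = torch.zeros(z_local.size(0), dtype=torch.long)   # positive = index 0
        return F.cross_entropy(logits, labels)

    def scl_loss(z_a, z_b, tau=0.5):
        """Sequence contrastive loss (InfoNCE): two augmented views of the same
        sequence are positives; other sequences in the batch are negatives."""
        z_a = F.normalize(z_a, dim=-1)
        z_b = F.normalize(z_b, dim=-1)
        logits = z_a @ z_b.t() / tau                               # (batch, batch)
        labels = torch.arange(z_a.size(0))
        return F.cross_entropy(logits, labels)

    if __name__ == "__main__":
        b, d = 8, 64
        print(mcl_loss(torch.randn(b, d), torch.randn(b, d), torch.randn(b, d)))
        print(scl_loss(torch.randn(b, d), torch.randn(b, d)))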

2021

Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning
Weijiang Yu | Yingpeng Wen | Fudan Zheng | Nong Xiao
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Recent algorithms for math word problems (MWP) neglect outside knowledge that is not present in the problems. Most of them capture only word-level relationships and fail to build hierarchical reasoning, as humans do, for mining the contextual structure between words and sentences. In this paper, we propose a Reasoning with Pre-trained Knowledge and Hierarchical Structure (RPKHS) network, which contains a pre-trained knowledge encoder and a hierarchical reasoning encoder. First, our pre-trained knowledge encoder reasons over the MWP using outside knowledge from pre-trained transformer-based models. Second, the hierarchical reasoning encoder seamlessly integrates word-level and sentence-level reasoning to bridge the entity and context domains in MWP. Extensive experiments show that our RPKHS significantly outperforms state-of-the-art approaches on two large-scale commonly-used datasets, boosting performance from 77.4% to 83.9% on Math23K, from 75.5% to 82.2% on Math23K with 5-fold cross-validation, and from 83.7% to 89.8% on MAWPS. Further ablations demonstrate the effectiveness and interpretability of our proposed method.
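
The word-level-then-sentence-level integration the abstract describes can be sketched as a two-stage encoder: a word-level encoder runs over each sentence, its outputs are pooled into sentence vectors, and a sentence-level encoder runs over those vectors to produce a problem representation. The module names, pooling choice, and dimensions below are assumptions for illustration, not the RPKHS architecture.

    # Hypothetical sketch of hierarchical word- and sentence-level encoding;
    # not the authors' RPKHS implementation.
    import torch
    import torch.nn as nn

    class HierarchicalEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # Word-level encoder: runs over the words of each sentence.
            self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
            # Sentence-level encoder: runs over the sentence vectors of the problem.
            self.sent_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)

        def forward(self, tokens):
            # tokens: (batch, n_sentences, n_words) word ids, 0 = padding
            b, s, w = tokens.shape
            word_emb = self.embed(tokens.view(b * s, w))            # (b*s, w, emb)
            word_states, _ = self.word_rnn(word_emb)                # (b*s, w, 2*hid)
            sent_vecs = word_states.mean(dim=1).view(b, s, -1)      # (b, s, 2*hid)
            sent_states, _ = self.sent_rnn(sent_vecs)               # (b, s, 2*hid)
            return sent_states.mean(dim=1)                          # problem-level vector

    if __name__ == "__main__":
        enc = HierarchicalEncoder(vocab_size=1000)
        ids = torch.randint(1, 1000, (2, 3, 12))                    # 2 problems, 3 sentences each
        print(enc(ids).shape)                                        # torch.Size([2, 256])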