Jinglong Luo


2024

SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC
Jinglong Luo | Yehong Zhang | Zhuo Zhang | Jiaqi Zhang | Xin Mu | Hui Wang | Yue Yu | Zenglin Xu
Findings of the Association for Computational Linguistics: ACL 2024

With the growing use of Transformer models hosted on cloud platforms to offer inference services, privacy concerns are escalating, especially concerning sensitive data like investment plans and bank account details. Secure Multi-Party Computation (SMPC) emerges as a promising solution to protect the privacy of inference data and model parameters. However, the application of SMPC in Privacy-Preserving Inference (PPI) for Transformer models often leads to considerable slowdowns or declines in performance. This is largely due to the multitude of nonlinear operations in the Transformer architecture, which are not well-suited to SMPC and are difficult to circumvent or optimize effectively. To address this concern, we introduce a comprehensive PPI framework called SecFormer that achieves fast and accurate PPI for Transformer models. We eliminate the high-cost exponential and maximum operations in PPI without sacrificing model performance, and we develop a suite of efficient SMPC protocols that employ suitable numerical computation methods to accelerate the other complex nonlinear functions in PPI, including GeLU, LayerNorm, and a redesigned Softmax. Our extensive experiments show that SecFormer outperforms MPCFormer, with performance improvements of 3.4% and 24.7% for BERT-Base and BERT-Large, respectively. In terms of efficiency, SecFormer is 3.57 and 3.58 times faster than PUMA for BERT-Base and BERT-Large, demonstrating its effectiveness and speed.
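
To make the motivation concrete, the sketch below contrasts a standard softmax, whose exponential and maximum operations are expensive to evaluate under SMPC, with a shifted-quadratic stand-in that avoids both. This is only an illustrative plaintext NumPy sketch: the function names, the shift constant c, and the quadratic form are assumptions in the spirit of prior 2Quad-style approximations, not SecFormer's actual redesigned Softmax or its SMPC protocols.

```python
import numpy as np

def softmax_exact(x, axis=-1):
    """Standard softmax: requires max (for stability) and exp, both costly under SMPC."""
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def softmax_quad(x, c=5.0, axis=-1):
    """Illustrative SMPC-friendly stand-in: replaces exp/max with a shifted square.

    The constant c and the quadratic form are assumptions for illustration only,
    not the protocol proposed in the SecFormer paper.
    """
    q = (x + c) ** 2
    return q / np.sum(q, axis=axis, keepdims=True)

if __name__ == "__main__":
    scores = np.random.randn(2, 8)  # toy attention scores
    print(np.abs(softmax_exact(scores) - softmax_quad(scores)).max())
```

In an actual SMPC deployment the division would itself be computed with a dedicated protocol; the point of the sketch is only that squaring and summation are far cheaper to evaluate on secret shares than exponentials and maxima.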