SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC

Jinglong Luo, Yehong Zhang, Zhuo Zhang, Jiaqi Zhang, Xin Mu, Hui Wang, Yue Yu, Zenglin Xu


Abstract
With the growing use of Transformer models hosted on cloud platforms to offer inference services, privacy concerns are escalating, particularly for sensitive data such as investment plans and bank account details. Secure Multi-Party Computation (SMPC) emerges as a promising solution to protect the privacy of inference data and model parameters. However, the application of SMPC in Privacy-Preserving Inference (PPI) for Transformer models often leads to considerable slowdowns or declines in performance. This is largely due to the multitude of nonlinear operations in the Transformer architecture, which are not well-suited to SMPC and are difficult to circumvent or optimize effectively. To address this concern, we introduce SecFormer, a comprehensive PPI framework that achieves fast and accurate PPI for Transformer models. We eliminate the high-cost exponential and maximum operations in PPI without sacrificing model performance, and we develop a suite of efficient SMPC protocols that employ suitable numerical computation methods to accelerate the other complex nonlinear functions in PPI, including GeLU, LayerNorm, and a redesigned Softmax. Our extensive experiments show that SecFormer outperforms MPCFormer in performance, with improvements of 3.4% and 24.7% for BERT-Base and BERT-Large, respectively. In terms of efficiency, SecFormer is 3.57 and 3.58 times faster than PUMA for BERT-Base and BERT-Large, demonstrating its effectiveness and speed.
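To see why eliminating the exponential and maximum operations matters: plaintext softmax computes exp(x - max(x)) / sum(exp(x - max(x))), and both exp and max require expensive protocols under SMPC, whereas additions, multiplications, and a single division are comparatively cheap. Below is a minimal NumPy sketch of a quadratic, exponent- and max-free softmax surrogate in the spirit of prior work (e.g., MPCFormer's 2Quad approximation); it is not SecFormer's actual redesigned Softmax, and the shift constant c and the eps guard are illustrative assumptions.

```python
import numpy as np

def softmax_plaintext(x, axis=-1):
    """Standard softmax: the exp and max calls here are the
    operations that are costly to realize under SMPC."""
    z = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return z / np.sum(z, axis=axis, keepdims=True)

def quad_softmax(x, c=5.0, eps=1e-6, axis=-1):
    """Exponent- and max-free surrogate (2Quad-style sketch, NOT
    SecFormer's construction): replace exp(x) with (x + c)^2, then
    normalize. Only additions, multiplications, and one division
    remain. c (assumed) shifts typical attention scores so the
    squared values preserve their ordering; eps (assumed) guards
    against division by zero."""
    t = np.square(x + c)
    return t / (np.sum(t, axis=axis, keepdims=True) + eps)

# Usage: compare both on a row of attention scores.
scores = np.array([1.2, -0.3, 0.8, 2.1])
print(softmax_plaintext(scores))
print(quad_softmax(scores))
```

Surrogates of this kind trade an exact probability distribution for SMPC-friendly arithmetic; the paper's contribution is a redesign that makes this trade without the accuracy loss such approximations usually incur.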
Anthology ID:
2024.findings-acl.790
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13333–13348
URL:
https://aclanthology.org/2024.findings-acl.790
Cite (ACL):
Jinglong Luo, Yehong Zhang, Zhuo Zhang, Jiaqi Zhang, Xin Mu, Hui Wang, Yue Yu, and Zenglin Xu. 2024. SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13333–13348, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC (Luo et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.790.pdf