April 6, 2023, 1:10 a.m. | Wenxuan Zeng, Meng Li, Wenjie Xiong, Tong Tong, Wenjie Lu, Jin Tan, Runsheng Wang, Ru Huang

cs.CR updates on arXiv.org arxiv.org

Secure multi-party computation (MPC) enables computation directly on
encrypted data and protects both data and model privacy in deep learning
inference. However, existing neural network architectures, including Vision
Transformers (ViTs), are not designed or optimized for MPC and incur
significant latency overhead. We observe Softmax accounts for the major latency
bottleneck due to a high communication complexity, but can be selectively
replaced or linearized without compromising the model accuracy. Hence, in this
paper, we propose an MPC-friendly ViT, dubbed MPCViT, …
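The excerpt does not spell out which replacement MPCViT uses, but the idea of linearizing Softmax can be sketched in plain NumPy: the exponential and division inside Softmax are what drive MPC communication cost, and a ReLU-plus-row-sum normalization (a hypothetical, simplified stand-in, not the paper's exact operator) avoids the exponential entirely.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: Softmax is MPC-unfriendly because exp()
    # and the normalizing division need many communication rounds.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

def relu_attention(Q, K, V, eps=1e-6):
    # Hypothetical linearized variant for illustration: replace exp()
    # with ReLU and normalize by the row sum, dropping the exponential.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.maximum(scores, 0.0)
    weights = weights / (weights.sum(axis=-1, keepdims=True) + eps)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(softmax_attention(Q, K, V).shape)  # (4, 8)
print(relu_attention(Q, K, V).shape)     # (4, 8)
```

Both variants produce attention outputs of the same shape, which is why such a substitution can be applied selectively per attention head without changing the surrounding architecture.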

