March 27, 2023, 1:10 a.m. | Mengxin Zheng, Qian Lou, Lei Jiang

cs.CR updates on arXiv.org

It is increasingly important to enable privacy-preserving inference for cloud
services based on Transformers. Post-quantum cryptographic techniques, e.g.,
fully homomorphic encryption (FHE) and multi-party computation (MPC), are
popular methods to support private Transformer inference. However, existing
works still suffer from prohibitive computational and communication
overhead. In this work, we present Primer, which enables fast and accurate
Transformer inference over encrypted data for natural language processing
tasks. In particular, Primer is constructed from a hybrid cryptographic
protocol optimized for attention-based Transformer …
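To make the hybrid FHE/MPC idea concrete, below is a minimal sketch in plain NumPy of the general pattern such protocols exploit: linear projections commute with additive secret sharing (or homomorphic encryption) and can be computed locally on shares, while the nonlinear softmax in attention requires an interactive sub-protocol. All names, dimensions, and the secret-sharing stand-in are illustrative assumptions; this is not Primer's actual protocol.

```python
# Toy illustration of the linear/nonlinear split used by hybrid FHE/MPC
# Transformer-inference protocols. Additive secret sharing stands in for
# the real cryptographic machinery; NOT Primer's actual construction.
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Additively secret-share a tensor between two parties."""
    r = rng.standard_normal(x.shape)
    return x - r, r                      # share_0 + share_1 == x

def linear(x_share, w):
    """Linear layers commute with additive sharing, so each party
    applies them locally -- the part FHE / secret sharing handles cheaply."""
    return x_share @ w

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy single-head attention: sequence length 4, model dimension 8.
x = rng.standard_normal((4, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))

x0, x1 = share(x)                        # client sends one share to each party
q0, q1 = linear(x0, wq), linear(x1, wq)  # Q stays shared: q0 + q1 == x @ wq
k0, k1 = linear(x0, wk), linear(x1, wk)
v0, v1 = linear(x0, wv), linear(x1, wv)

# Attention scores are bilinear; a real protocol would use HE multiplication
# or Beaver triples here. We reconstruct in the clear only to keep the toy short.
scores = (q0 + q1) @ (k0 + k1).T / np.sqrt(8)

# The softmax is the genuinely nonlinear step that hybrid protocols hand to an
# interactive MPC sub-protocol (or replace with a crypto-friendly approximation).
attn = softmax(scores) @ (v0 + v1)
print(attn.shape)                        # (4, 8)
```

The reconstruction steps above are where the real cryptographic cost lives; the design question such protocols answer is how to keep the linear work non-interactive while minimizing what the nonlinear steps leak and cost.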

