March 9, 2023, 2:10 a.m. | Xin Yuan, Wei Ni, Ming Ding, Kang Wei, Jun Li, H. Vincent Poor

cs.CR updates on arXiv.org (arxiv.org)

While preserving the privacy of federated learning (FL), differential privacy (DP) inevitably degrades the utility (i.e., accuracy) of FL because of the model perturbations caused by the DP noise added to model updates. Existing studies have considered only noise with a constant (persistent) root-mean-square amplitude, overlooking the opportunity to adjust the amplitude over time to alleviate the adverse effects of the noise. This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of FL and retain the capability …
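Only the abstract is excerpted above, so the paper's actual mechanism and noise schedule are not specified here. The minimal Python sketch below illustrates the general idea under assumed choices: a standard Gaussian mechanism applied to clipped client updates, with a hypothetical geometric-decay schedule for the noise standard deviation across FL rounds. The function names (perturb_update, noise_schedule) and the decay rule are illustrative assumptions, not the authors' design.

import numpy as np

def perturb_update(update, clip_norm, sigma_t, rng):
    # Clip the client's model update to bound its L2 sensitivity,
    # then add Gaussian DP noise scaled by the round-dependent sigma_t.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma_t * clip_norm, size=update.shape)
    return clipped + noise

def noise_schedule(sigma_0, decay, round_idx):
    # Hypothetical time-varying amplitude: geometric decay over rounds.
    # (The excerpt does not state the schedule the paper actually uses.)
    return sigma_0 * (decay ** round_idx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    update = np.ones(10)  # toy model update from one client
    for t in range(3):
        sigma_t = noise_schedule(sigma_0=1.0, decay=0.9, round_idx=t)
        noisy = perturb_update(update, clip_norm=1.0, sigma_t=sigma_t, rng=rng)
        print(t, sigma_t, np.linalg.norm(noisy - update))

Lowering sigma_t in later rounds (as in this toy schedule) injects less perturbation when the model is close to convergence, which is the intuition behind adapting the noise amplitude rather than keeping it constant; the privacy accounting must of course track the full sequence of amplitudes.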

accuracy, differential privacy, federated learning, noise, privacy, utility
