Differentially private training of residual networks with scale normalisation. (arXiv:2203.00324v2 [cs.LG] UPDATED)
May 9, 2022, 1:20 a.m. | Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis
cs.CR updates on arXiv.org
Training neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs. In this work, we propose to alleviate these trade-offs in residual networks with Group Normalisation through a simple architectural modification termed ScaleNorm, by which an additional normalisation layer is introduced after the residual block's addition operation. Our method allows us to further improve on the recently reported state of the art on CIFAR-10, achieving a top-1 accuracy of 82.5% (ε=8.0) when trained …
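The modification the abstract describes — an extra normalisation layer placed after the residual addition — can be sketched in PyTorch. This is a hedged illustration only: the layer sizes, group count, and activation choices below are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PostAddNormResidualBlock(nn.Module):
    """Residual block with Group Normalisation, plus one extra
    normalisation layer applied *after* the skip-connection addition,
    in the spirit of the ScaleNorm modification described in the abstract.
    Hypothetical sketch: channel/group counts are illustrative."""

    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gn1 = nn.GroupNorm(groups, channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gn2 = nn.GroupNorm(groups, channels)
        # The key architectural change: normalise the sum of the
        # residual branch and the identity path.
        self.post_add_norm = nn.GroupNorm(groups, channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.gn1(self.conv1(x)))
        out = self.gn2(self.conv2(out))
        out = out + x                  # residual addition
        out = self.post_add_norm(out)  # extra normalisation after the addition
        return self.act(out)

block = PostAddNormResidualBlock(channels=16)
y = block(torch.randn(2, 16, 8, 8))
```

Group Normalisation (rather than Batch Normalisation) is the natural pairing here because DP-SGD clips and noises per-sample gradients, and batch statistics would leak information across samples.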
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Information Security Manager & ISSO
@ Federal Reserve System | Minneapolis, MN
Forensic Lead
@ Arete | Hyderabad
Lead Security Risk Analyst (GRC)
@ Justworks, Inc. | New York City
Senior Consultant, Cyber Crisis Management & Business Continuity (M/F)
@ Hifield | Sèvres, France