May 9, 2022, 1:20 a.m. | Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis

cs.CR updates on arXiv.org

The training of neural networks with Differentially Private Stochastic
Gradient Descent offers formal Differential Privacy guarantees but introduces
accuracy trade-offs. In this work, we propose to alleviate these trade-offs in
residual networks with Group Normalisation through a simple architectural
modification termed ScaleNorm by which an additional normalisation layer is
introduced after the residual block's addition operation. Our method allows us
to further improve on the recently reported state of the art on CIFAR-10,
achieving a top-1 accuracy of 82.5% (ε = 8.0) when trained …
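
The excerpt cuts off before the architectural details, but the modification it names can be illustrated. Below is a minimal PyTorch sketch, assuming a standard two-convolution residual block; the kernel size, the group count, and the use of GroupNorm for the post-addition layer are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class ScaleNormResidualBlock(nn.Module):
        # Residual block with Group Normalisation and one extra
        # normalisation layer inserted after the skip-connection
        # addition, as the abstract describes. The two-convolution
        # layout, kernel size, and group count are assumptions.

        def __init__(self, channels: int, num_groups: int = 8):
            super().__init__()
            # channels must be divisible by num_groups
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, bias=False)
            self.gn1 = nn.GroupNorm(num_groups, channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, bias=False)
            self.gn2 = nn.GroupNorm(num_groups, channels)
            # The described modification: normalise again after the addition.
            self.post_add_norm = nn.GroupNorm(num_groups, channels)
            self.act = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.act(self.gn1(self.conv1(x)))
            out = self.gn2(self.conv2(out))
            out = out + x                   # residual addition
            out = self.post_add_norm(out)   # extra normalisation layer
            return self.act(out)

    # Example: a 64-channel block on a CIFAR-10-sized feature map.
    block = ScaleNormResidualBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))   # -> shape (1, 64, 32, 32)

Group Normalisation is the natural choice in this setting because batch normalisation mixes statistics across samples and is therefore incompatible with the per-sample gradient clipping that DP-SGD requires.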
