Improving Differentially Private SGD via Randomly Sparsified Gradients. (arXiv:2112.00845v2 [cs.LG] UPDATED)
Nov. 24, 2022, 2:10 a.m. | Junyi Zhu, Matthew B. Blaschko
cs.CR updates on arXiv.org
Differentially private stochastic gradient descent (DP-SGD) has been widely
adopted in deep learning to provide rigorously defined privacy. It requires
gradient clipping to bound the maximum norm of individual gradients, followed
by additive isotropic Gaussian noise. By analyzing the convergence rate of
DP-SGD in a non-convex setting, we reveal that randomly sparsifying gradients
before clipping and noisification adjusts a trade-off between internal
components of the convergence bound, leading to a smaller upper bound when the
noise is dominant. Additionally, …
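The abstract describes a concrete per-step pipeline (sparsify, then clip, then add noise), so a minimal sketch may help. This is not the authors' implementation; the function name dp_sgd_aggregate and the parameters clip_norm, noise_multiplier, and sparsify_rate are illustrative assumptions.

```python
import numpy as np

def dp_sgd_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                     sparsify_rate=0.5, rng=None):
    """Aggregate per-example gradients with random sparsification applied
    before clipping and Gaussian noisification, in the order the abstract
    describes.

    per_example_grads: array of shape (batch_size, dim), one gradient per
    training example. Returns the noised mean gradient for the update step.
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_size, dim = per_example_grads.shape

    # Step 1 (the paper's modification): randomly zero a fraction of
    # coordinates in each per-example gradient BEFORE clipping and noise.
    keep_mask = rng.random((batch_size, dim)) >= sparsify_rate
    sparse = per_example_grads * keep_mask

    # Step 2 (standard DP-SGD): clip each per-example gradient so its
    # L2 norm is at most clip_norm, bounding per-example sensitivity.
    norms = np.linalg.norm(sparse, axis=1, keepdims=True)
    clipped = sparse * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Step 3 (standard DP-SGD): sum and perturb with isotropic Gaussian
    # noise whose scale is calibrated to the clipping norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=dim)
    return (clipped.sum(axis=0) + noise) / batch_size
```

The sketch keeps the noise isotropic over all coordinates for simplicity; whether noise should instead be added only to the surviving coordinates is a design detail the truncated abstract does not settle.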
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Lead Product Security Engineer
@ Baker Hughes | Bangalore, Karnataka, India (Neon Building West Tower)
Penetration Tester
@ BT Group | Hemel Hempstead, United Kingdom (Riverside, R6)
Cloud and Infrastructure Security Engineer II
@ StubHub | Los Angeles, CA