Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger. (arXiv:2206.07136v2 [cs.LG] UPDATED)
July 13, 2022, 1:20 a.m. | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
cs.CR updates on arXiv.org
Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training of deep learning models. The choice of clipping norm $R$, however, has been shown to be vital for achieving high accuracy under DP. We propose an easy-to-use replacement, called AutoClipping, that eliminates the need to tune $R$ for any DP optimizer, including DP-SGD, DP-Adam, DP-LAMB, and many others. The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters …
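To make the abstract concrete, here is a minimal NumPy sketch of the two ideas it contrasts: standard per-example clipping, which scales each example's gradient by min(1, R/||g_i||) and therefore requires tuning R, and an automatic, normalization-style variant that divides by ||g_i|| + γ instead. The function names, and the stability constant `gamma`, are illustrative choices for this sketch, not identifiers from the paper.

```python
import numpy as np

def clip_per_example(grads, R):
    """Standard DP-SGD per-example clipping with clipping norm R.

    Each example's gradient g_i is scaled by min(1, R / ||g_i||),
    so its norm is capped at R before averaging.
    """
    clipped = [g * min(1.0, R / np.linalg.norm(g)) for g in grads]
    return np.mean(clipped, axis=0)

def auto_clip_per_example(grads, gamma=0.01):
    """Sketch of an R-free "automatic" variant.

    Each gradient is normalized by 1 / (||g_i|| + gamma), removing
    the clipping-norm hyperparameter R entirely. `gamma` is a small
    stability constant (an assumption of this sketch).
    """
    clipped = [g / (np.linalg.norm(g) + gamma) for g in grads]
    return np.mean(clipped, axis=0)
```

In both cases every per-example contribution has norm at most 1 (for R = 1) or strictly below 1 (for γ > 0), which is what bounds each example's influence and makes the subsequent noise addition yield a DP guarantee.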
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Lead Product Security Engineer
@ Baker Hughes | Bangalore, Karnataka, India (Neon Building West Tower)
Penetration Tester
@ BT Group | Riverside (R6), Hemel Hempstead, United Kingdom
Cloud and Infrastructure Security Engineer II
@ StubHub | Los Angeles, CA