Unlocking High-Accuracy Differentially Private Image Classification through Scale. (arXiv:2204.13650v1 [cs.LG])
April 29, 2022, 1:20 a.m. | Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle
cs.CR updates on arXiv.org arxiv.org
Differential Privacy (DP) provides a formal privacy guarantee preventing
adversaries with access to a machine learning model from extracting information
about individual training points. Differentially Private Stochastic Gradient
Descent (DP-SGD), the most popular DP training method, realizes this protection
by injecting noise during training. However, previous works have found that
DP-SGD often leads to a significant degradation in performance on standard
image classification benchmarks. Furthermore, some authors have postulated that
DP-SGD inherently performs poorly on large models, since the norm …
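The noise injection the abstract refers to happens at the gradient level: DP-SGD clips each example's gradient to a fixed norm and then adds Gaussian noise to the aggregate before the parameter update. A minimal NumPy sketch of one such step (the function name and default hyperparameters here are illustrative, not from the paper):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip per-example gradients, add Gaussian noise.

    Illustrative sketch only; real implementations (e.g. Opacus, JAX
    privacy libraries) also track the privacy budget (epsilon, delta).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip each example's gradient so its L2 norm is at most clip_norm.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Add isotropic Gaussian noise calibrated to the clipping norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

The clipping bounds each individual example's influence on the update, which is what lets the added noise mask any single training point; the per-parameter noise cost of that masking is one reason larger models have been assumed to suffer more.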