Differentially Private Data Generation Needs Better Features. (arXiv:2205.12900v2 [stat.ML] UPDATED)
Oct. 25, 2022, 1:20 a.m. | Fredrik Harder, Milad Jalali Asadabadi, Danica J. Sutherland, Mijung Park
cs.CR updates on arXiv.org
Training even moderately-sized generative models with differentially-private
stochastic gradient descent (DP-SGD) is difficult: the required level of noise
for reasonable levels of privacy is simply too large. We advocate instead
building on a good, relevant representation learned from an informative public
dataset, then modeling the private data with that representation. In
particular, we minimize the maximum mean discrepancy (MMD) between private
target data and a generator's distribution, using a kernel based on perceptual
features learned from a public dataset. …
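The objective sketched in the abstract — matching a generator's distribution to the private data by minimizing MMD under a kernel built from fixed, publicly learned features — can be illustrated with a minimal (linear-kernel) example. Here `feat` is a hypothetical random-projection stand-in for the perceptual feature extractor, and no differential-privacy noise is added; this is only a sketch of the MMD loss, not the paper's full method.

```python
import numpy as np

def feature_mmd2(x_real, x_gen, feat):
    """Squared MMD with a linear kernel in feature space:
    || mean(feat(x_real)) - mean(feat(x_gen)) ||^2.
    With a fixed feature map, this reduces to comparing mean embeddings."""
    mu_real = feat(x_real).mean(axis=0)  # private-data mean embedding
    mu_gen = feat(x_gen).mean(axis=0)    # generator mean embedding
    diff = mu_real - mu_gen
    return float(diff @ diff)

# Hypothetical stand-in for features learned on a public dataset:
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
feat = lambda x: np.tanh(x @ W)

x_real = rng.standard_normal((64, 8))          # "private" samples
x_gen = rng.standard_normal((64, 8)) + 1.0     # shifted generator samples

print(feature_mmd2(x_real, x_gen, feat))   # positive: distributions differ
print(feature_mmd2(x_real, x_real, feat))  # 0.0: identical samples
```

In a DP training loop, one would privatize the private-data mean embedding `mu_real` once (e.g. via the Gaussian mechanism) and then train the generator against that fixed noisy target, avoiding per-step DP-SGD noise.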