Too Good to be True? Turn Any Model Differentially Private With DP-Weights
July 1, 2024, 4:14 a.m. | David Zagardo
cs.CR updates on arXiv.org arxiv.org
Abstract: Imagine training a machine learning model with Differentially Private Stochastic Gradient Descent (DP-SGD), only to discover post-training that the noise level was either too high, crippling your model's utility, or too low, compromising privacy. The dreaded realization hits: you must restart the lengthy training process from scratch. But what if you could avoid this retraining nightmare? In this study, we introduce an approach, novel to our knowledge, that applies differential privacy noise to the model's …
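The abstract is truncated, but the idea it describes — adding differential-privacy noise to trained weights rather than to per-step gradients as in DP-SGD — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the function name `add_dp_noise_to_weights` and the specific clip-then-add-Gaussian-noise recipe (borrowed from the Gaussian mechanism) are assumptions for illustration only, and the privacy accounting the paper would provide is omitted entirely.

```python
import numpy as np


def add_dp_noise_to_weights(weights, clip_norm, noise_multiplier, rng=None):
    """Illustrative post-hoc noising of model weights (Gaussian mechanism style).

    Clips the flattened weight vector to L2 norm `clip_norm` (bounding its
    sensitivity), then adds Gaussian noise with standard deviation
    `noise_multiplier * clip_norm`. NOTE: hypothetical sketch; real DP
    guarantees require a formal sensitivity analysis and privacy accounting.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.asarray(weights, dtype=float)

    # Clip to bound the L2 norm of the weight vector.
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)

    # Add calibrated Gaussian noise; larger multiplier = more privacy, less utility.
    sigma = noise_multiplier * clip_norm
    return w + rng.normal(0.0, sigma, size=w.shape)
```

The appeal described in the abstract is that the noise multiplier here is applied after training, so different privacy/utility trade-offs can be explored from a single trained model instead of retraining under DP-SGD for each noise level.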