A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization
May 7, 2024, 4:12 a.m. | Ashwinee Panda, Xinyu Tang, Saeed Mahloujifar, Vikash Sehwag, Prateek Mittal
cs.CR updates on arXiv.org (arxiv.org)
Abstract: An open problem in differentially private deep learning is hyperparameter optimization (HPO). DP-SGD introduces new hyperparameters and complicates existing ones, forcing researchers to painstakingly tune hyperparameters with hundreds of trials, which in turn makes it impossible to account for the privacy cost of HPO without destroying the utility. We propose an adaptive HPO method that uses cheap trials (in terms of privacy cost and runtime) to estimate optimal hyperparameters and scales them up. We obtain …
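To make the "cheap trials, then scale up" idea concrete, here is a minimal NumPy sketch, not the paper's algorithm: it tunes the learning rate with short, small-batch DP-SGD trials, then applies a linear scaling rule when moving to the full configuration. The dp_sgd function, the synthetic data, the candidate grid, and the choice to scale the learning rate linearly with the batch-size increase are all illustrative assumptions; the paper derives its own scaling rule and privacy accounting, which this sketch does not reproduce.

```python
import numpy as np

def dp_sgd(lr, batch_size, steps, noise_multiplier, clip_norm, X, y, rng):
    """Plain NumPy DP-SGD for logistic regression; returns final training loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        z = np.clip(Xb @ w, -30.0, 30.0)
        preds = 1.0 / (1.0 + np.exp(-z))
        per_example = (preds - yb)[:, None] * Xb  # per-example gradients
        # Clip each per-example gradient to clip_norm, then add Gaussian noise.
        norms = np.linalg.norm(per_example, axis=1, keepdims=True)
        clipped = per_example / np.maximum(1.0, norms / clip_norm)
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
        w -= lr * (clipped.sum(axis=0) + noise) / batch_size
    z = np.clip(X @ w, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X @ rng.normal(size=20) + rng.normal(size=2000) > 0).astype(float)

# Stage 1: cheap trials (few steps, small batch) burn little privacy
# budget and runtime each, so a small grid stays affordable.
small_batch, cheap_steps = 64, 50
candidates = [0.01, 0.05, 0.1, 0.5, 1.0]
losses = {lr: dp_sgd(lr, small_batch, cheap_steps, 1.0, 1.0, X, y, rng)
          for lr in candidates}
best_lr = min(losses, key=losses.get)

# Stage 2: scale up to the full run. As an illustrative stand-in for the
# paper's rule, scale the learning rate linearly with the batch-size increase.
large_batch, full_steps = 512, 500
scaled_lr = best_lr * (large_batch / small_batch)
final_loss = dp_sgd(scaled_lr, large_batch, full_steps, 1.0, 1.0, X, y, rng)
print(f"best cheap-trial lr={best_lr}, scaled lr={scaled_lr}, loss={final_loss:.3f}")
```

The design point the sketch illustrates is that only the cheap trials consume extra privacy budget during search; the scaled-up hyperparameters are then used for a single full run.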