May 7, 2024, 4:12 a.m. | Ashwinee Panda, Xinyu Tang, Saeed Mahloujifar, Vikash Sehwag, Prateek Mittal

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2212.04486v3 Announce Type: replace-cross
Abstract: An open problem in differentially private deep learning is hyperparameter optimization (HPO). DP-SGD introduces new hyperparameters and complicates existing ones, forcing researchers to painstakingly tune hyperparameters with hundreds of trials, which in turn makes it impossible to account for the privacy cost of HPO without destroying the utility. We propose an adaptive HPO method that uses cheap trials (in terms of privacy cost and runtime) to estimate optimal hyperparameters and scales them up. We obtain …
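The abstract only sketches the approach, so here is a minimal illustrative sketch of the general idea: run privacy-cheap DP-SGD trials (few epochs, high noise) to pick hyperparameters, then scale them up for the full-budget run. In the code below, train_dp_sgd is a hypothetical stand-in for an actual DP-SGD training loop (e.g. built with Opacus), and the linear scaling of the learning rate and batch size is an assumption for illustration, not necessarily the paper's exact rule.

```python
# Illustrative sketch only: cheap, privacy-inexpensive trials are used to
# select hyperparameters, which are then scaled up for the full-budget run.
# train_dp_sgd() is a hypothetical placeholder, and the linear scale-up of
# the learning rate / batch size is an assumption, not the paper's rule.

import itertools
import random

def train_dp_sgd(lr, batch_size, clip_norm, epochs, noise_multiplier):
    """Placeholder for a DP-SGD training run; returns a validation metric."""
    return random.random()  # stand-in for real validation accuracy

# Cheap trials: one epoch and high noise keep the privacy cost of search low.
grid = itertools.product([0.1, 0.5, 1.0],   # candidate learning rates
                         [256, 1024],       # candidate batch sizes
                         [0.5, 1.0])        # candidate clipping norms

best_lr, best_bs, best_clip = max(
    grid,
    key=lambda hp: train_dp_sgd(*hp, epochs=1, noise_multiplier=4.0),
)

# Scale up the selected hyperparameters for the full training run.
scale = 4  # hypothetical scale-up factor
final_metric = train_dp_sgd(
    lr=best_lr * scale,          # assumed linear scaling of the learning rate
    batch_size=best_bs * scale,
    clip_norm=best_clip,
    epochs=20,
    noise_multiplier=1.0,
)
```

Because the search itself consumes part of the privacy budget, the appeal of this scheme is that the expensive full-budget configuration is trained only once, while the exploratory trials are deliberately kept cheap in both runtime and privacy cost.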

