June 12, 2023, 1:10 a.m. | Hua Wang, Sheng Gao, Huanyu Zhang, Weijie J. Su, Milan Shen

cs.CR updates on arXiv.org

Hyperparameter optimization, also known as hyperparameter tuning, is a widely recognized technique for improving model performance. Regrettably, when training private ML models, many practitioners overlook the privacy risks associated with hyperparameter optimization, which can expose sensitive information about the underlying dataset. Currently, the only existing approach that allows privacy-preserving hyperparameter optimization is to uniformly and randomly select hyperparameters for a number of runs, subsequently reporting the best-performing hyperparameter. In contrast, in non-private settings, practitioners commonly utilize "adaptive" hyperparameter …

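As a rough illustration of the baseline the abstract describes (uniformly and randomly selecting hyperparameters over a fixed number of runs and reporting the best one), here is a minimal Python sketch. The names `train_with_dp`, `evaluate`, and the candidate list are hypothetical placeholders, not from the paper, and the sketch omits the privacy accounting needed to make the end-to-end procedure differentially private.

```python
import random

def uniform_private_tuning(candidate_hparams, num_runs, train_with_dp, evaluate):
    """Sketch of the uniform-random baseline from the abstract:
    sample hyperparameters uniformly at random for a fixed number of runs,
    train each candidate with a differentially private routine (e.g., DP-SGD),
    and report the best-performing setting. The overall privacy guarantee
    would follow from composing the per-run guarantees (omitted here)."""
    best_hparams, best_score = None, float("-inf")
    for _ in range(num_runs):
        hparams = random.choice(candidate_hparams)   # uniform random draw
        model = train_with_dp(hparams)               # each run is private on its own
        score = evaluate(model)                      # validation metric
        if score > best_score:
            best_hparams, best_score = hparams, score
    return best_hparams, best_score
```

The draws are non-adaptive by design: no run's outcome influences which hyperparameters are tried next, which is exactly the limitation the abstract contrasts with the "adaptive" methods common in non-private settings.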
