May 16, 2022, 1:20 a.m. | Wenxuan Bao, Luke A. Bauer, Vincent Bindschaedler

cs.CR updates on arXiv.org

We study a pitfall in the typical workflow for differentially private machine
learning. The use of differentially private learning algorithms in a "drop-in"
fashion -- without accounting for the impact of differential privacy (DP) noise
when choosing what feature engineering operations to use, what features to
select, or what neural network architecture to use -- yields overly complex and
poorly performing models. In other words, by anticipating the impact of DP
noise, a simpler and more accurate alternative model could …

architecture, machine learning
