Nov. 15, 2022, 2:20 a.m. | Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta

cs.CR updates on arXiv.org

We introduce new differentially private (DP) mechanisms for gradient-based
machine learning (ML) training involving multiple passes (epochs) of a dataset,
substantially improving the achievable privacy-utility-computation tradeoffs.
Our key contribution is an extension of the online matrix factorization DP
mechanism to multiple participations, substantially generalizing the approach
of DMRST2022. We first give conditions under which it is possible to reduce the
problem of per-iteration vector contributions to the simpler one of scalar
contributions. Using this, we formulate the construction of optimal …
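To make the underlying mechanism concrete, here is a minimal sketch of the basic (single-participation) matrix-factorization DP mechanism in the style of DMRST2022, applied to prefix sums of scalar contributions: the workload A is factored as A = B C, Gaussian noise is calibrated to the sensitivity of C, and the release is B(Cx + z). This is only an illustration of the general idea; the function name mf_dp_prefix_sums, the noise multiplier sigma, and the square-root factorization B = C = sqrtm(A) are illustrative choices, not the optimal multi-epoch factorizations constructed in the paper.

    # Sketch of the single-participation matrix-factorization DP mechanism
    # (illustrative only; not the paper's multi-epoch construction).
    import numpy as np
    from scipy.linalg import sqrtm

    def mf_dp_prefix_sums(x, sigma, rng=None):
        """Release DP estimates of all prefix sums of x via a factorization A = B C."""
        rng = np.random.default_rng(rng)
        n = len(x)
        A = np.tril(np.ones((n, n)))            # workload matrix: A @ x gives all prefix sums
        B = C = np.real(sqrtm(A))               # one valid factorization with A = B @ C
        sens = np.linalg.norm(C, axis=0).max()  # L2 sensitivity of C @ x when each user
                                                # contributes to one step with |x_i| <= 1
        z = rng.normal(scale=sigma * sens, size=n)  # Gaussian noise calibrated to C
        return B @ (C @ x + z)                  # noisy release of A @ x, i.e. B(Cx + z)

    # Example: x could be clipped per-step scalar gradient contributions.
    x = np.array([1.0, -0.5, 0.3, 0.8])
    print(mf_dp_prefix_sums(x, sigma=1.0, rng=0))

In this framing, the paper's contribution can be read as choosing B and C to minimize error when the same user may participate in multiple steps (multiple epochs), which changes the sensitivity calculation above.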

