Feb. 15, 2024, 5:10 a.m. | Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher Choquette-Choo, Milad Nasr

cs.CR updates on arXiv.org arxiv.org

arXiv:2402.09403v1 Announce Type: new
Abstract: Differential privacy (DP) offers a theoretical upper bound on the potential privacy leakage of an algorithm, while empirical auditing establishes a practical lower bound. Auditing techniques exist for DP training algorithms. However, machine learning can also be made private at inference. We propose the first framework for auditing private prediction, where we instantiate adversaries with varying poisoning and query capabilities. This enables us to study the privacy leakage of four private prediction algorithms: PATE [Papernot et al., 2016], CaPC [Choquette-Choo …
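The abstract's framing — DP as a theoretical upper bound, empirical auditing as a practical lower bound — can be made concrete with a small sketch. The snippet below is an illustrative computation (not the paper's framework) of an ε lower bound from a distinguishing attack's true/false positive counts, using the standard hypothesis-testing view of (ε, δ)-DP, under which any attack must satisfy TPR ≤ e^ε · FPR + δ. All function names and the example numbers are assumptions for illustration.

```python
import numpy as np
from scipy.stats import beta  # exact (Clopper-Pearson) binomial bounds


def clopper_pearson_upper(successes, trials, alpha=0.05):
    """One-sided upper confidence bound on a binomial success rate."""
    if successes == trials:
        return 1.0
    return beta.ppf(1 - alpha, successes + 1, trials - successes)


def clopper_pearson_lower(successes, trials, alpha=0.05):
    """One-sided lower confidence bound on a binomial success rate."""
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, trials - successes + 1)


def empirical_epsilon_lower_bound(tp, fp, n_pos, n_neg, delta=1e-5, alpha=0.05):
    """Lower-bound epsilon from a distinguishing attack's operating point.

    (eps, delta)-DP implies TPR <= e^eps * FPR + delta for any attack,
    so eps >= log((TPR - delta) / FPR). Evaluating at a conservative
    lower bound on TPR and upper bound on FPR keeps the estimate a
    statistically valid lower bound rather than a point estimate.
    """
    tpr_lo = clopper_pearson_lower(tp, n_pos, alpha)
    fpr_hi = clopper_pearson_upper(fp, n_neg, alpha)
    if fpr_hi <= 0.0 or tpr_lo <= delta:
        return 0.0  # attack too weak to certify any leakage
    return max(0.0, np.log((tpr_lo - delta) / fpr_hi))


# Hypothetical example: an adversary's attack succeeds on 950 of 1000
# trials where the target record is present and falsely fires on 50 of
# 1000 trials where it is absent.
print(empirical_epsilon_lower_bound(tp=950, fp=50, n_pos=1000, n_neg=1000))
```

In the private-prediction setting the abstract describes, the attack outcomes would come from an adversary's poisoning and query interactions with the prediction interface rather than from training-time canaries, but the certification step from attack success rates to an ε lower bound takes this same general form.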

