June 24, 2022, 1:20 a.m. | Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten

cs.CR updates on arXiv.org arxiv.org

Differential privacy is widely accepted as the de facto method for preventing
data leakage in ML, and conventional wisdom suggests that it offers strong
protection against privacy attacks. However, existing semantic guarantees for
DP focus on membership inference, which may overestimate the adversary's
capabilities and is not applicable when membership status itself is
non-sensitive. In this paper, we derive the first semantic guarantees for DP
mechanisms against training data reconstruction attacks under a formal threat
model. We show that two …
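As a minimal illustration of the kind of DP mechanism whose guarantees the abstract analyzes — not the paper's own construction — the sketch below applies the classical Gaussian mechanism, which adds noise calibrated to a query's sensitivity to obtain (ε, δ)-differential privacy. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with Gaussian noise giving (epsilon, delta)-DP.

    Uses the classical calibration
        sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which is valid for epsilon <= 1.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

# Example: privately release a count with sensitivity 1.
noisy_count = gaussian_mechanism(100.0, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```

A stronger privacy budget (smaller ε or δ) yields a larger σ, i.e., noisier releases — the same trade-off that governs how tightly any DP mechanism can bound reconstruction.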
