June 10, 2022, 1:20 a.m. | Alberto Blanco-Justicia, David Sanchez, Josep Domingo-Ferrer, Krishnamurty Muralidhar

cs.CR updates on arXiv.org arxiv.org

We review the use of differential privacy (DP) for privacy protection in
machine learning (ML). We show that, driven by the aim of preserving the
accuracy of the learned models, DP-based ML implementations are so loose that
they do not offer the ex ante privacy guarantees of DP. Instead, what they
deliver is basically noise addition similar to the traditional (and often
criticized) statistical disclosure control approach. Due to the lack of formal
privacy guarantees, the actual level of privacy …

critical differential privacy machine learning privacy review
