Oct. 26, 2022, 1:23 a.m. | Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi

cs.CR updates on arXiv.org arxiv.org

Differential privacy (DP) is by far the most widely accepted framework for
mitigating privacy risks in machine learning. However, exactly how small the
privacy parameter $\epsilon$ needs to be to protect against certain privacy
risks in practice is still not well understood. In this work, we study data
reconstruction attacks for discrete data and analyze them under the framework of
multiple hypothesis testing. We utilize different variants of the celebrated
Fano's inequality to derive upper bounds on the inferential power of …
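The abstract's core tool, Fano's inequality, lower-bounds an attacker's error probability when guessing one of $M$ candidate records from an observation: $P_{\mathrm{err}} \ge 1 - (I(X;Y) + \log 2)/\log M$. As a rough illustration of the flavor of bound involved (not the paper's specific result, whose details are truncated above), here is a minimal sketch of that classical form:

```python
import math

def fano_lower_bound(mutual_info_nats: float, num_hypotheses: int) -> float:
    """Classical Fano lower bound on the error probability of any estimator
    that must identify one of `num_hypotheses` equally likely candidates
    from an observation carrying `mutual_info_nats` of mutual information.

        P_err >= 1 - (I(X;Y) + log 2) / log M
    """
    if num_hypotheses < 2:
        return 0.0  # with a single candidate there is nothing to infer
    bound = 1.0 - (mutual_info_nats + math.log(2)) / math.log(num_hypotheses)
    return max(0.0, bound)

# With zero leaked information and 16 candidates, any attacker errs
# with probability at least 0.75.
print(fano_lower_bound(0.0, 16))
```

A DP mechanism with small $\epsilon$ constrains how much information the output can carry about any one record, which is what makes bounds of this shape usable for reasoning about reconstruction attacks.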

machine learning, privacy, testing
