Evaluations of Machine Learning Privacy Defenses are Misleading
April 29, 2024, 4:11 a.m. | Michael Aerni, Jie Zhang, Florian Tramèr
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Empirical defenses for machine learning privacy forgo the provable guarantees of differential privacy in the hope of achieving higher utility while resisting realistic adversaries. We identify severe pitfalls in existing empirical privacy evaluations (based on membership inference attacks) that result in misleading conclusions. In particular, we show that prior evaluations fail to characterize the privacy leakage of the most vulnerable samples, use weak attacks, and avoid comparisons with practical differential privacy baselines. In 5 case …
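The critique centers on how membership inference attacks are used to score empirical defenses. As a rough illustration only (not the paper's actual attack or evaluation protocol), the sketch below shows a minimal loss-threshold membership inference attack on simulated per-example losses, and reports the true-positive rate at a low false-positive rate, a worst-case-flavored metric in the spirit of the abstract's point that average-case evaluations fail to capture leakage on the most vulnerable samples. All data and numbers here are hypothetical.

```python
# Minimal loss-threshold membership inference sketch (illustrative only;
# the paper's attacks and baselines are stronger and more careful).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example cross-entropy losses: training members typically
# have lower loss than held-out non-members.
member_losses = rng.gamma(shape=2.0, scale=0.2, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)

# Attack score: negative loss (higher score => predicted member).
scores = np.concatenate([-member_losses, -nonmember_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# Build an ROC curve by sweeping the score threshold, then read off the
# true-positive rate at a very low false-positive rate. Average-case metrics
# (accuracy, AUC) can look benign while this number is still high.
order = np.argsort(-scores)
sorted_labels = labels[order]
tp = np.cumsum(sorted_labels == 1)
fp = np.cumsum(sorted_labels == 0)
tpr = tp / (labels == 1).sum()
fpr = fp / (labels == 0).sum()

target_fpr = 0.001
idx = np.searchsorted(fpr, target_fpr, side="right") - 1
print(f"TPR at {target_fpr:.1%} FPR: {tpr[idx]:.3f}")
```

A stronger evaluation, as the abstract suggests, would additionally compare such numbers against a practical differential privacy baseline (e.g., DP-SGD tuned for utility) rather than against weak attacks alone.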