Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?
Feb. 16, 2024, 5:10 a.m. | Andrew Lowy, Zhuohang Li, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
Source: cs.CR updates on arXiv.org
Abstract: For small privacy parameter $\epsilon$, $\epsilon$-differential privacy (DP) provides a strong worst-case guarantee that no membership inference attack (MIA) can succeed at determining whether a person's data was used to train a machine learning model. The guarantee of DP is worst-case because: a) it holds even if the attacker already knows the records of all but one person in the data set; and b) it holds uniformly over all data sets. In practical applications, such …
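The worst-case guarantee sketched in the abstract can be made concrete with a standard result that is not part of the abstract itself: under the hypothesis-testing view of $\epsilon$-DP, any membership inference attacker's true positive rate is bounded by $e^\epsilon$ times its false positive rate, which (with a balanced 50/50 membership prior) caps attack accuracy at $e^\epsilon/(1+e^\epsilon)$. A minimal illustrative sketch, assuming that balanced-prior setting:

```python
import math

def max_mia_accuracy(epsilon: float) -> float:
    """Worst-case bound on a membership inference attacker's accuracy
    against an epsilon-DP mechanism, assuming a balanced (50/50) prior.
    Follows from the hypothesis-testing view of DP: TPR <= e^eps * FPR."""
    return math.exp(epsilon) / (1.0 + math.exp(epsilon))

# Small epsilon keeps the bound near chance (0.5); large epsilon makes
# the worst-case guarantee nearly vacuous, motivating the paper's question.
for eps in [0.1, 1.0, 5.0, 10.0]:
    print(f"epsilon = {eps:>4}: attack accuracy <= {max_mia_accuracy(eps):.4f}")
```

For $\epsilon = 0.1$ the bound is roughly 0.525 (barely better than guessing), while for $\epsilon = 10$ it exceeds 0.9999, illustrating why large-epsilon DP offers no meaningful worst-case protection on paper, even though practical attacks often still fail.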