May 16, 2022, 1:20 a.m. | Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu

cs.CR updates on arXiv.org

A large body of research has shown that machine learning models are
vulnerable to membership inference (MI) attacks, which violate the privacy of
the participants in the training data. Most MI research focuses on a single
standalone model, while production machine-learning platforms often update
models over time, on data whose distribution shifts, giving the attacker more
information. This paper proposes new attacks that take advantage of one or
more model updates to improve MI. A …
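
The abstract is truncated before the attack details, but the core intuition can be illustrated with a standard loss-threshold MI test extended to two model snapshots: an example that was in the training data tends to receive low loss under both the pre-update and post-update models, so combining the two scores sharpens the membership signal. The sketch below is a generic illustration under that assumption, not the paper's actual attacks; `model_v1`, `model_v2`, the scikit-learn-style `predict_proba` interface, and the threshold `tau` are all hypothetical names introduced here.

```python
import numpy as np

def per_example_loss(model, x, y):
    """Cross-entropy loss of `model` on one labeled example (x, y).
    Assumes a scikit-learn-style `predict_proba` interface."""
    probs = model.predict_proba(x[None, :])[0]
    return -np.log(probs[y] + 1e-12)

def mi_score_single(model, x, y):
    """Classic single-model MI score: lower loss suggests membership."""
    return -per_example_loss(model, x, y)

def mi_score_updated(model_v1, model_v2, x, y):
    """Two-snapshot MI score (illustrative, not the paper's method):
    a training example tends to score well under both the pre-update
    and post-update models, so summing the scores sharpens the test."""
    return mi_score_single(model_v1, x, y) + mi_score_single(model_v2, x, y)

def predict_member(model_v1, model_v2, x, y, tau):
    """Flag (x, y) as a likely training-set member if the combined
    score exceeds `tau`, a threshold calibrated on known non-members."""
    return mi_score_updated(model_v1, model_v2, x, y) > tau
```

In this framing, the attacker's extra leverage from a model update is simply a second, correlated observation of each candidate example; how best to combine such observations is exactly what the paper's attacks develop.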

