Feb. 27, 2024, 5:11 a.m. | Sajjad Zarifzadeh, Philippe Liu, Reza Shokri

cs.CR updates on arXiv.org

arXiv:2312.03262v2 Announce Type: replace-cross
Abstract: Membership inference attacks (MIAs) aim to detect whether a particular data point was used to train a machine learning model. Recent strong attacks have high computational costs and inconsistent performance under varying conditions, rendering them unreliable for practical privacy risk assessment. We design a novel, efficient, and robust membership inference attack (RMIA) which accurately differentiates between population data and training data of a model, with minimal computational overhead. We achieve this by a more accurate …
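The abstract's core distinction is between a model's behavior on training members versus unseen population data. RMIA's own test is more sophisticated (the truncated abstract does not give its details), but as a rough illustration of the general membership-inference setting only, a classic loss-threshold attack can be sketched: calibrate a loss threshold on known non-member data, then flag low-loss points as likely members. All names and numbers below are illustrative assumptions, not from the paper.

```python
import numpy as np

def loss_threshold_mia(model_loss, candidate_points, threshold):
    """Classic loss-threshold membership inference (illustrative, not RMIA):
    flag a point as a training member when the model's loss on it falls
    below a threshold calibrated on known non-member (population) data.
    `model_loss` is any callable mapping a data point to a scalar loss."""
    return [model_loss(x) < threshold for x in candidate_points]

# Toy demonstration with synthetic losses: members of the training set
# typically incur lower loss than unseen population data.
rng = np.random.default_rng(0)
population_losses = rng.normal(1.0, 0.3, 100)  # losses on unseen data

# Calibrate the threshold so only ~5% of non-members would be flagged.
threshold = np.percentile(population_losses, 5)

# Hypothetical per-point losses for two candidates.
losses = {"a": 0.15, "b": 1.2}
guesses = loss_threshold_mia(lambda x: losses[x], ["a", "b"], threshold)
```

Here point "a" (low loss) is flagged as a member and "b" (high loss) is not. The paper's point is precisely that simple tests like this are unreliable across conditions, motivating a more robust likelihood-based comparison against population data.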

