Web: http://arxiv.org/abs/2111.09679

Sept. 14, 2022, 1:20 a.m. | Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, Reza Shokri

cs.CR updates on arXiv.org arxiv.org

How much does a machine learning algorithm leak about its training data, and
why? Membership inference attacks are used as an auditing tool to quantify this
leakage. In this paper, we present a comprehensive hypothesis testing framework
that enables us not only to formally express the prior work in a consistent
way, but also to design new membership inference attacks that use reference
models to achieve a significantly higher power (true positive rate) at any
(false positive rate) error. More …
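To make the power/error trade-off concrete, here is a minimal sketch (not the paper's exact attack) of a membership inference test that scores examples by how anomalous their target-model loss looks relative to a reference distribution, then calibrates a threshold at a chosen false positive rate. The loss distributions, reference statistics, and the 1% FPR target are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses: members (in the training set) tend to
# have lower loss under the target model than non-members. These
# distributions are synthetic, chosen only for illustration.
n = 1000
member_losses = rng.normal(loc=0.5, scale=0.3, size=n)
nonmember_losses = rng.normal(loc=1.5, scale=0.5, size=n)

# Reference models (trained without the candidate examples) would supply
# the "out" loss distribution; here we assume its statistics are known.
ref_mean, ref_std = 1.5, 0.5

def score(loss):
    # Likelihood-ratio-style score: lower loss than the reference
    # distribution predicts => stronger evidence of membership.
    return (ref_mean - loss) / ref_std

member_scores = score(member_losses)
nonmember_scores = score(nonmember_losses)

# Calibrate the decision threshold on non-member scores for a target
# false positive rate, then measure the attack's power (TPR) there.
target_fpr = 0.01
thresh = np.quantile(nonmember_scores, 1 - target_fpr)
tpr = np.mean(member_scores > thresh)
fpr = np.mean(nonmember_scores > thresh)
print(f"FPR ~ {fpr:.3f}, TPR (power) = {tpr:.3f}")
```

The point of the calibration step is exactly the framing in the abstract: an attack is judged by the true positive rate it achieves at a fixed, small false positive rate, not by average-case accuracy.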

Tags: attacks, machine learning, machine learning models
