May 2, 2022, 1:20 a.m. | Dominik Hintersdorf, Lukas Struppek, Kristian Kersting

cs.CR updates on arXiv.org arxiv.org

Membership inference attacks (MIAs) aim to determine whether a specific
sample was used to train a predictive model. Knowing this may indeed lead to a
privacy breach. Most MIAs, however, make use of the model's prediction scores -
the probability of each output given some input - following the intuition that
the trained model tends to behave differently on its training data. We argue
that this is a fallacy for many modern deep network architectures.
Consequently, MIAs will miserably fail …
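The score-based intuition the abstract describes can be sketched in a few lines: a simple attack flags a sample as a training member whenever the model's top prediction score is unusually high. The softmax helper, the threshold value, and the synthetic logits below are illustrative assumptions, not the paper's method.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_member(logits, threshold=0.9):
    """Score-based MIA baseline (illustrative): flag the sample as a
    training member if the model's top softmax score exceeds the
    threshold -- the very intuition the paper argues breaks down for
    overconfident modern networks."""
    return max(softmax(logits)) >= threshold

# Synthetic demo: peaked logits mimic a confident prediction on a
# (hypothetical) memorized training sample; flat logits mimic an
# uncertain prediction on unseen data.
confident = [5.0, 0.0, 0.0]
uncertain = [0.5, 0.4, 0.3]

print(is_member(confident))  # True  -- flagged as member
print(is_member(uncertain))  # False -- flagged as non-member
```

Because many modern networks are overconfident on unseen inputs as well, such a threshold attack produces many false positives, which is the fallacy the authors point to.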

