Nov. 5, 2023, 6:10 a.m. | Jiacheng Li, Ninghui Li, Bruno Ribeiro

cs.CR updates on arXiv.org

In Membership Inference (MI) attacks, the adversary tries to determine whether an
instance was used to train a machine learning (ML) model. MI attacks are a major
privacy concern when private data are used to train ML models. Most MI attacks in
the literature exploit the fact that ML models are trained to fit the
training data well, and thus have very low loss on training instances. Most
defenses against MI attacks therefore try to make the model fit …
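To make the observation above concrete, here is a minimal sketch of the classic loss-threshold style of MI attack: since models tend to have lower loss on training instances than on unseen ones, an adversary can guess membership by comparing a target instance's loss against a threshold. This is an illustrative sketch, not the authors' method; the model, the instance `(x, y)`, and the threshold `tau` are all hypothetical, and in practice `tau` would be calibrated (e.g., on shadow models or held-out data).

```python
# Hypothetical sketch of a loss-threshold membership inference attack.
# Assumes a trained PyTorch classifier; `tau` is a calibrated threshold.
import torch
import torch.nn.functional as F

def loss_based_mi_attack(model: torch.nn.Module,
                         x: torch.Tensor,
                         y: torch.Tensor,
                         tau: float) -> bool:
    """Guess membership: True if the model's loss on (x, y) is below tau."""
    model.eval()
    with torch.no_grad():
        logits = model(x.unsqueeze(0))             # shape: (1, num_classes)
        loss = F.cross_entropy(logits, y.unsqueeze(0))
    return loss.item() < tau                       # low loss -> likely a training instance
```

Defenses of the kind the abstract describes aim to shrink the train/non-train loss gap this attack relies on, so that no threshold separates members from non-members well.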

