MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training. (arXiv:2311.00919v1 [cs.CR])
cs.CR updates on arXiv.org
In Membership Inference (MI) attacks, the adversary tries to determine whether an
instance was used to train a machine learning (ML) model. MI attacks are a major
privacy concern when private data are used to train ML models. Most MI attacks in
the literature exploit the fact that ML models are trained to fit the
training data well, and thus have very low loss on training instances. Most
defenses against MI attacks therefore try to make the model fit …
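As a toy illustration of the loss gap that most MI attacks exploit, the sketch below implements a simple loss-threshold attack: guess "member" whenever the model's loss on an instance falls below a cutoff. All losses and the threshold here are simulated assumptions for illustration, not values or methods from the paper.

```python
import random

random.seed(0)

# Simulated per-instance losses (assumed values, purely illustrative):
# training (member) instances cluster at low loss, non-members at higher loss.
member_losses = [abs(random.gauss(0.1, 0.05)) for _ in range(1000)]
nonmember_losses = [abs(random.gauss(0.8, 0.3)) for _ in range(1000)]

def predict_member(loss, threshold=0.4):
    """Guess that an instance is a training member iff its loss is below the threshold."""
    return loss < threshold

# Attack accuracy: fraction of correct membership guesses over both groups.
correct = sum(predict_member(l) for l in member_losses) + \
          sum(not predict_member(l) for l in nonmember_losses)
accuracy = correct / (len(member_losses) + len(nonmember_losses))
print(f"attack accuracy: {accuracy:.2f}")
```

Because the simulated member and non-member loss distributions barely overlap, this naive threshold attack guesses membership far better than chance, which is exactly the leakage that defenses such as the paper's membership-invariant training aim to shrink.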