Do Backdoors Assist Membership Inference Attacks? (arXiv:2303.12589v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
When an adversary injects poison samples into a machine learning model's
training data, privacy-leakage attacks such as membership inference, which
infers whether a given sample was included in the model's training set, become
more effective because poisoning pushes the target sample toward being an
outlier. However, such attacks can be detected, since the poison samples
degrade the model's inference accuracy. In this paper, we discuss a
\textit{backdoor-assisted membership inference attack}, a novel membership
inference attack based on backdoors that return the adversary's expected
output for a …
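To make the underlying membership inference primitive concrete, the sketch below shows a classic confidence-thresholding attack in miniature. The `model_confidence` function is a hypothetical stand-in for a trained model that is more confident on samples it memorized during training; it is not the paper's backdoor-assisted method, only an illustration of the basic inference the abstract refers to.

```python
# Toy sketch of a confidence-thresholding membership inference attack.
# `model_confidence` is a hypothetical stand-in: real attacks query a
# trained model whose confidence tends to be higher on training members
# due to overfitting.

def model_confidence(sample, train_set):
    # Mimic an overfit model: high confidence on memorized training
    # samples, lower confidence on unseen ones.
    return 0.99 if sample in train_set else 0.55

def infer_membership(sample, train_set, threshold=0.9):
    # Attack decision: claim the sample was a training member when the
    # model's confidence exceeds a threshold calibrated on known
    # non-members.
    return model_confidence(sample, train_set) > threshold

train_set = {"a", "b", "c"}
print(infer_membership("a", train_set))  # member -> True
print(infer_membership("z", train_set))  # non-member -> False
```

In practice the threshold is calibrated on shadow models or held-out data; the paper's contribution, per the abstract, is to boost this signal via a planted backdoor rather than via detectable outlier poisoning.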