Arbitrary Decisions are a Hidden Cost of Differentially-Private Training. (arXiv:2302.14517v1 [cs.LG])
cs.CR updates on arXiv.org
Mechanisms used in privacy-preserving machine learning often aim to guarantee
differential privacy (DP) during model training. Practical DP-ensuring training
methods use randomization when fitting model parameters to privacy-sensitive
data (e.g., adding Gaussian noise to clipped gradients). We demonstrate that
such randomization incurs predictive multiplicity: for a given input example,
the output predicted by equally-private models depends on the randomness used
in training. Thus, for a given input, the predicted output can vary drastically
if a model is re-trained, even if …
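The effect described above can be illustrated with a toy DP-SGD loop: per-example gradient clipping plus Gaussian noise, retrained under several random seeds, then queried on an input near the decision boundary. This is a minimal sketch, not the paper's experimental setup; the function name, hyperparameters, and synthetic data are all illustrative assumptions, and the privacy-budget (epsilon) accounting that a real DP library performs is omitted.

```python
import numpy as np

def dp_sgd_train(X, y, seed, epochs=200, lr=0.5, clip=1.0, sigma=1.0):
    """Toy DP-SGD for logistic regression: clip each per-example
    gradient to L2 norm <= clip, add Gaussian noise scaled by sigma.
    Illustrative only; real epsilon accounting is omitted."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grads = (p - y)[:, None] * X             # per-example gradients, shape (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)  # clip each gradient
        noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
        w -= lr * noisy_sum / len(X)
    return w

# Synthetic, privacy-irrelevant data just to exercise the loop.
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

x_query = np.array([0.05, -0.05])  # a borderline input near the boundary
scores = [float(x_query @ dp_sgd_train(X, y, seed)) for seed in range(20)]
labels = [int(s > 0) for s in scores]
# Every run satisfies the same DP guarantee, yet the training noise
# alone changes the model, and can flip the label for such inputs.
print(labels)
```

Each seed yields a different, equally-private model; for inputs close to the decision boundary the predicted label can vary run to run, which is the predictive multiplicity the abstract describes.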