March 1, 2023, 2:10 a.m. | Bogdan Kulynych, Hsiang Hsu, Carmela Troncoso, Flavio P. Calmon

cs.CR updates on arXiv.org

Mechanisms used in privacy-preserving machine learning often aim to guarantee
differential privacy (DP) during model training. Practical DP-ensuring training
methods use randomization when fitting model parameters to privacy-sensitive
data (e.g., adding Gaussian noise to clipped gradients). We demonstrate that
such randomization incurs predictive multiplicity: for a given input example,
the output predicted by equally-private models depends on the randomness used
in training. Thus, for a given input, the predicted output can vary drastically
if a model is re-trained, even if …
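
To make the described effect concrete, below is a minimal sketch of a DP-SGD-style training loop (per-example gradient clipping plus Gaussian noise) on a toy logistic-regression model, re-trained under several random seeds to observe prediction disagreement on a fixed input. The data, clipping norm, noise multiplier, and other hyperparameters are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def dp_sgd_train(X, y, seed, epochs=50, lr=0.5, clip=1.0, sigma=1.0):
    # Toy DP-SGD-style loop: clip per-example gradients, add Gaussian noise.
    # clip and sigma are hypothetical hyperparameters for illustration only.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Per-example gradients of the logistic loss.
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grads = (p - y)[:, None] * X                      # shape (n, d)
        # Clip each per-example gradient to L2 norm <= clip.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)
        # Sum, add Gaussian noise calibrated to the clipping norm, average.
        noisy = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
        w -= lr * noisy / len(X)
    return w

# Toy data: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Re-train with different seeds: each run satisfies the same DP guarantee,
# yet the prediction on a fixed input can flip from run to run
# (predictive multiplicity).
x_test = np.array([0.05, -0.05])
preds = [int((x_test @ dp_sgd_train(X, y, seed=s)) > 0) for s in range(20)]
print("predictions across 20 retrained models:", preds)
print("mean prediction:", np.mean(preds))
```

A mean prediction strictly between 0 and 1 indicates that equally-private models, differing only in training randomness, disagree on this input.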
