Feb. 21, 2024, 5:11 a.m. | Alexander Ziller, Anneliese Riess, Kristian Schwethelm, Tamara T. Mueller, Daniel Rueckert, Georgios Kaissis

cs.CR updates on arXiv.org arxiv.org

arXiv:2402.12861v1 Announce Type: cross
Abstract: Reconstruction attacks on machine learning (ML) models pose a strong risk of leakage of sensitive data. In specific contexts, an adversary can (almost) perfectly reconstruct training data samples from a trained model using the model's gradients. When training ML models with differential privacy (DP), formal upper bounds on the success of such reconstruction attacks can be provided. So far, these bounds have been formulated under worst-case assumptions that might not hold in realistic practice. In …
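The gradient-based reconstruction the abstract refers to is typically realised as a gradient-matching (gradient inversion) attack: the adversary optimises a dummy input until its gradient matches the gradient observed for the unknown training sample. Below is a minimal, illustrative PyTorch sketch, not code from the paper; the tiny network, the single observed per-example gradient, and the assumption that the adversary already knows the true label are simplifications for illustration.

```python
# Minimal sketch of a gradient-matching reconstruction attack (illustrative,
# not the paper's method). Assumes the adversary sees one per-example gradient
# and knows the label of the sample being reconstructed.
import torch
import torch.nn as nn

model = nn.Linear(784, 10)          # toy classifier standing in for the trained model
loss_fn = nn.CrossEntropyLoss()

# Gradient the adversary observed for one (unknown) training sample.
x_true = torch.rand(1, 784)
y_true = torch.tensor([3])          # label assumed known here for simplicity
observed_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                     model.parameters())

# The adversary optimises a dummy input so its gradient matches the observed one.
x_dummy = torch.rand(1, 784, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    match_loss = sum(((dg - og) ** 2).sum()
                     for dg, og in zip(dummy_grads, observed_grads))
    match_loss.backward()
    opt.step()

# x_dummy now approximates the unknown sample x_true. DP training (per-example
# gradient clipping plus Gaussian noise) is what bounds how well this can work.
```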

Tags: arXiv, cs.CR, cs.LG, reconstruction attacks, adversaries, differential privacy, machine learning, ML models, privacy risk, sensitive data, training data
