Dec. 8, 2022, 2:18 a.m. | Zihan Wang, Jason Lee, Qi Lei

cs.CR updates on arXiv.org

Understanding when and how much a model gradient leaks information about the
training sample is an important question in privacy. In this paper, we present
a surprising result: even without training or memorizing the data, we can fully
reconstruct the training samples from a single gradient query at a randomly
chosen parameter value. We prove the identifiability of the training data under
mild conditions: with shallow or deep neural networks and a wide range of
activation functions. We also present …
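To give intuition for why a single gradient query can expose a training sample, here is a minimal sketch of my own (not the paper's algorithm, and only for a one-layer linear model with squared loss): the weight gradient at any query point W0 is the outer product of a residual with the input, so every nonzero row of the observed gradient is proportional to the private input x, which the attacker can recover up to scale.

```python
import numpy as np

# Sketch under stated assumptions: one-layer model y = W x, squared
# loss L = 0.5 * ||W x - y||^2. Then dL/dW = (W0 x - y) x^T = r x^T,
# a rank-1 matrix whose rows are all scalar multiples of x.

rng = np.random.default_rng(0)
d, k = 5, 3                       # input dim, output dim (arbitrary)
x = rng.normal(size=d)            # the "private" training sample
y = rng.normal(size=k)            # its label (unknown to the attacker)
W0 = rng.normal(size=(k, d))      # randomly chosen query parameters

r = W0 @ x - y                    # residual at the query point
grad = np.outer(r, x)             # the single gradient the attacker observes

# Attacker side: pick the strongest row of the gradient; it is
# proportional to x, so its direction recovers x up to sign/scale.
i = int(np.argmax(np.abs(r)))
x_hat = grad[i] / np.linalg.norm(grad[i])

cos = abs(x_hat @ x) / np.linalg.norm(x)
print(cos)                        # close to 1: direction of x recovered
```

This toy case only pins down x up to scale; the paper's contribution is proving full identifiability under mild conditions for shallow and deep networks with a wide range of activation functions, where the nonlinearity removes the ambiguity.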

