Sept. 15, 2022, 1:20 a.m. | Guy Heller, Ethan Fetaya

cs.CR updates on arXiv.org | arxiv.org

Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has been
suggested for differentially private learning. While previous research provides
differential privacy bounds for SGLD at the initial steps of the algorithm or
when close to convergence, the question of what differential privacy guarantees
can be made in between remains unanswered. This interim region is of great
importance, especially for Bayesian neural networks, as it is hard to guarantee
convergence to the posterior. This paper shows that using SGLD might result …
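
For context, the SGLD update the abstract refers to is a minibatch gradient step with injected Gaussian noise (Welling & Teh, 2011); the noise is what invites comparison with differentially private SGD. Below is a minimal sketch of one such step, not the paper's analysis; the callables grad_log_prior and grad_log_lik are hypothetical stand-ins for the model's gradients.

import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik,
              n_total, step_size, rng):
    # One Stochastic Gradient Langevin Dynamics update:
    #   theta' = theta + (eps/2) * (grad log p(theta)
    #            + (N/n) * sum_i grad log p(x_i | theta)) + noise,
    # where noise ~ N(0, eps * I).
    scale = n_total / len(minibatch)          # rescale minibatch gradient
    grad = grad_log_prior(theta) + scale * sum(
        grad_log_lik(x, theta) for x in minibatch)
    # Injected Gaussian noise with variance equal to the step size.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise

As the step size decays, the iterates approximately sample from the posterior; the open question the abstract raises is what privacy guarantee this noise provides in the interim regime, after the first steps but before convergence.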

deep learning | differential privacy | privacy
