Web: http://arxiv.org/abs/2110.05057

Sept. 15, 2022, 1:20 a.m. | Guy Heller, Ethan Fetaya

cs.CR updates on arXiv.org

Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has been
suggested for differentially private learning. While previous research provides
differential privacy bounds for SGLD at the initial steps of the algorithm or
when close to convergence, the question of what differential privacy guarantees
can be made in between remains unanswered. This interim region is of great
importance, especially for Bayesian neural networks, as it is hard to guarantee
convergence to the posterior. This paper shows that using SGLD might result …
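For context, the SGLD update the abstract refers to (Welling & Teh, 2011) perturbs a stochastic gradient step on the log-posterior with Gaussian noise whose variance matches the step size. Below is a minimal sketch on a toy 1-D Gaussian model (data x_i ~ N(theta, 1), prior theta ~ N(0, 10)); the model, step-size schedule, and all constants are illustrative assumptions, not taken from the paper, and no privacy accounting is performed here.

```python
import numpy as np

# Toy setup (assumed for illustration): 1-D Gaussian likelihood, Gaussian prior.
rng = np.random.default_rng(0)
N = 1000
data = rng.normal(2.0, 1.0, size=N)  # true mean = 2.0

def grad_log_prior(theta):
    # d/dtheta of log N(theta; 0, 10)
    return -theta / 10.0

def grad_log_lik(theta, batch):
    # d/dtheta of sum_i log N(x_i; theta, 1) over the minibatch
    return np.sum(batch - theta)

theta = 0.0
batch_size = 32
samples = []
for t in range(5000):
    # Decaying step size, as in the usual SGLD schedule (constants assumed)
    eps = 1e-3 / (1 + t) ** 0.55
    batch = rng.choice(data, size=batch_size, replace=False)
    # Unbiased stochastic gradient of the log-posterior
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    # SGLD step: half-step-size gradient move plus N(0, eps) injected noise
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

# After burn-in, the iterates approximately sample the posterior over theta
posterior_mean = np.mean(samples[1000:])
```

The paper's question concerns exactly the iterates collected mid-run here: early steps and near-convergence behavior have known privacy bounds, but the interim samples do not.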

Tags: deep learning, differential privacy, privacy
