Sept. 12, 2022, 1:20 a.m. | Rishav Chourasia, Jiayuan Ye, Reza Shokri

cs.CR updates on arXiv.org (arxiv.org)

What is the information leakage of an iterative randomized learning algorithm
about its training data, when the internal state of the algorithm is
\emph{private}? How much does each specific training epoch contribute to the
information leakage through the released model? We study this problem for
noisy gradient descent algorithms, and model the \emph{dynamics} of R\'enyi
differential privacy loss throughout the training process. Our analysis traces
a provably \emph{tight} bound on the R\'enyi divergence between the pair of
probability …
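To make the setting concrete, here is a minimal sketch of the kind of noisy gradient descent procedure the abstract studies: each step clips the gradient and adds Gaussian noise, which is what makes the per-epoch privacy loss analyzable. The function name, parameters, and quadratic example below are illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import numpy as np

def noisy_gradient_descent(grad_fn, theta0, n_epochs, lr=0.1,
                           clip=1.0, sigma=1.0, seed=0):
    """Illustrative noisy GD sketch (not the paper's exact algorithm):
    theta_{t+1} = theta_t - lr * (clip(grad(theta_t)) + Gaussian noise).
    `sigma` scales the noise relative to the clipping norm `clip`.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_epochs):
        g = grad_fn(theta)
        # Clip the gradient to bound each step's sensitivity to any one example.
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        # Gaussian noise calibrated to the clipping norm.
        noise = rng.normal(0.0, sigma * clip, size=theta.shape)
        theta = theta - lr * (g + noise)
    return theta

# Hypothetical usage: minimize f(theta) = ||theta||^2 / 2, so grad = theta.
result = noisy_gradient_descent(lambda t: t, theta0=[5.0],
                                n_epochs=100, sigma=0.1)
```

With the internal iterates kept private and only the final `theta` released, the paper's question is how the Rényi divergence between output distributions on neighboring datasets evolves across these epochs.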

Tags: differential privacy, privacy
