Shifted Interpolation for Differential Privacy
March 4, 2024, 5:10 a.m. | Jinho Bok, Weijie Su, Jason M. Altschuler
cs.CR updates on arXiv.org
Abstract: Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. Quantifying their privacy leakage is a fundamental question, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by establishing (and refining) the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy, which tightly captures all aspects of the privacy loss and immediately implies tighter privacy accounting …
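For readers unfamiliar with the algorithm family the abstract refers to, here is a minimal sketch of noisy gradient descent with per-example clipping and Gaussian noise. This is an illustrative assumption of one common form (a DP-SGD-style update on a simple least-squares loss); the parameter names `clip`, `sigma`, and `lr` are hypothetical and not calibrated to any formal privacy guarantee from the paper.

```python
import numpy as np

def noisy_gradient_descent(X, y, steps=100, lr=0.05, clip=5.0, sigma=0.1, seed=0):
    """Sketch of noisy GD: clip each example's gradient, average, add Gaussian noise.

    Illustrative only -- clip/sigma/lr are hypothetical, not tuned for any
    particular (epsilon, delta) or f-DP guarantee.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Per-example gradients of the squared loss 0.5 * (x.w - y)^2
        residuals = X @ w - y                  # shape (n,)
        grads = residuals[:, None] * X         # shape (n, d)
        # Clip each per-example gradient to L2 norm <= clip
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        # Average, then add Gaussian noise scaled to the clipping bound
        g = grads.mean(axis=0) + rng.normal(0.0, sigma * clip / n, size=d)
        w -= lr * g
    return w
```

The clipping step bounds each example's influence on the update, which is what makes the Gaussian noise sufficient for a privacy guarantee; the paper's contribution concerns how tightly the cumulative privacy loss of many such iterations can be accounted for.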