One-shot Empirical Privacy Estimation for Federated Learning
April 19, 2024, 4:11 a.m. | Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith M. Suriyakumar
cs.CR updates on arXiv.org arxiv.org
Abstract: Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or for empirically measuring privacy loss in settings where the known analytical bounds are not tight. However, existing privacy auditing techniques usually make strong assumptions about the adversary (e.g., knowledge of intermediate model iterates or of the training data distribution), are tailored to specific tasks, model architectures, or DP algorithms, and/or require retraining the model many times (typically on the order of …
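To make the idea of empirical privacy estimation concrete, here is a minimal sketch of the standard auditing-style bound (not the one-shot estimator this paper proposes): any (ε, δ)-DP mechanism constrains a membership-inference attack's true/false positive rates via TPR ≤ e^ε · FPR + δ, so measured attack rates yield an empirical lower bound on ε. The function name and example rates below are illustrative assumptions.

```python
import math

def empirical_eps_lower_bound(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Empirical lower bound on epsilon from an attack's TPR/FPR.

    For an (eps, delta)-DP mechanism, tpr <= exp(eps) * fpr + delta must
    hold, so eps >= log((tpr - delta) / fpr). Illustrative sketch only;
    not the one-shot estimator proposed in the paper.
    """
    if fpr <= 0:
        raise ValueError("fpr must be positive for a finite bound")
    # Clamp the numerator so the log stays defined for weak attacks.
    return max(0.0, math.log(max(tpr - delta, 1e-12) / fpr))

# Hypothetical attack distinguishing training canaries from held-out ones:
print(round(empirical_eps_lower_bound(tpr=0.8, fpr=0.1), 3))  # → 2.079
```

A weak attack (TPR close to FPR) gives a bound near zero, which is why naive auditing needs many retraining runs to observe rare, high-confidence events — the limitation the abstract highlights.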