Bayesian Estimation of Differential Privacy. (arXiv:2206.05199v2 [cs.LG] UPDATED)
June 16, 2022, 1:20 a.m. | Santiago Zanella-Béguelin (Microsoft Research), Lukas Wutschitz (Microsoft), Shruti Tople (Microsoft Research), Ahmed Salem (Microsoft Research),
cs.CR updates on arXiv.org (arxiv.org)
Algorithms such as Differentially Private SGD enable training machine
learning models with formal privacy guarantees. However, there is a discrepancy
between the protection that such algorithms guarantee in theory and the
protection they afford in practice. An emerging strand of work empirically
estimates the protection afforded by differentially private training as a
confidence interval for the privacy budget $\varepsilon$ spent on training a
model. Existing approaches derive confidence intervals for $\varepsilon$ from
confidence intervals for the false positive and false …
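The abstract refers to deriving a confidence interval for $\varepsilon$ from confidence intervals on an attack's false positive and false negative rates. A minimal sketch of that frequentist baseline (the approach the paper builds on, not its Bayesian method), using the standard hypothesis-testing characterization of $(\varepsilon, \delta)$-DP, under which every membership-inference attack must satisfy FPR + e^ε·FNR ≥ 1 − δ; the function name and inputs here are illustrative:

```python
import math

def eps_lower_bound(fpr_upper, fnr_upper, delta=1e-5):
    """Lower confidence bound on epsilon implied by upper confidence
    bounds on a membership-inference attack's error rates.

    Any (eps, delta)-DP mechanism forces every attack to satisfy
    FPR + exp(eps) * FNR >= 1 - delta (and likewise with the two
    error rates swapped), so bounding the attack's error rates from
    above bounds epsilon from below.
    """
    b1 = (1.0 - delta - fpr_upper) / fnr_upper
    b2 = (1.0 - delta - fnr_upper) / fpr_upper
    # Clamp at 1 so a weak attack yields only the trivial bound eps >= 0.
    return math.log(max(b1, b2, 1.0))

# A strong attack (low error rates) certifies a larger privacy leak:
print(eps_lower_bound(0.1, 0.1))   # roughly log(9), about 2.2
# An attack no better than chance certifies nothing:
print(eps_lower_bound(0.5, 0.5))   # 0.0
```

The upper bounds fed in would themselves come from confidence intervals (e.g. Clopper-Pearson) on the attack's observed error counts; the looseness this two-step construction introduces is part of what motivates the Bayesian estimator proposed in the paper.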
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning (1 day, 3 hours ago)
Transferability Ranking of Adversarial Examples (1 day, 3 hours ago)
A survey on hardware-based malware detection approaches (1 day, 3 hours ago)
Explainable Ponzi Schemes Detection on Ethereum (1 day, 3 hours ago)
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cyber Security Architect - SR
@ ERCOT | Taylor, TX
SOC Analyst
@ Wix | Tel Aviv, Israel
Associate Director, SIEM & Detection Engineering (remote)
@ Humana | Remote US
Senior DevSecOps Architect
@ Computacenter | Birmingham, GB, B37 7YS