Large Scale Transfer Learning for Differentially Private Image Classification. (arXiv:2205.02973v1 [cs.LG])
May 9, 2022, 1:20 a.m. | Harsh Mehta, Abhradeep Thakurta, Alexey Kurakin, Ashok Cutkosky
cs.CR updates on arXiv.org | arxiv.org
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example-level privacy. Training models with DP protects the model against leakage of sensitive data in a potentially adversarial setting. In the field of deep learning, Differentially Private Stochastic Gradient Descent (DP-SGD) has emerged as a popular private training algorithm. Private training using DP-SGD protects against leakage by injecting noise into individual example gradients, such that the trained model weights become nearly independent of the use …
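The mechanism described above can be sketched in a few lines: clip each per-example gradient to a fixed norm, average, and add Gaussian noise before the weight update. This is a minimal illustrative sketch, not the paper's implementation; the function name, parameter defaults, and NumPy-based setup are assumptions for illustration.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step (illustrative sketch).

    1. Clip each example's gradient to L2 norm <= clip_norm.
    2. Sum the clipped gradients and add Gaussian noise with
       standard deviation noise_multiplier * clip_norm.
    3. Average and take a gradient step.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return weights - lr * noisy_mean
```

Because the noise scale is tied to the clipping norm rather than to any single example's gradient, no individual example can move the averaged update by more than a bounded amount, which is what makes the trained weights nearly independent of any one example.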
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning
1 day, 5 hours ago | arxiv.org
Transferability Ranking of Adversarial Examples
1 day, 5 hours ago | arxiv.org
A survey on hardware-based malware detection approaches
1 day, 5 hours ago | arxiv.org
Explainable Ponzi Schemes Detection on Ethereum
1 day, 5 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cyber Security Architect - SR
@ ERCOT | Taylor, TX
SOC Analyst
@ Wix | Tel Aviv, Israel
Associate Director, SIEM & Detection Engineering (remote)
@ Humana | Remote US
Senior DevSecOps Architect
@ Computacenter | Birmingham, GB, B37 7YS