Deep Leakage from Model in Federated Learning. (arXiv:2206.04887v1 [cs.LG])
June 13, 2022, 1:20 a.m. | Zihao Zhao, Mengen Luo, Wenbo Ding
cs.CR updates on arXiv.org arxiv.org
Distributed machine learning has been widely used in recent years to tackle
large and complex datasets. Accordingly, the security of distributed
learning has drawn increasing attention from both academia and industry.
In this context, federated learning (FL) was developed as a "secure" form of
distributed learning: private training data is kept locally, and only
model gradients are communicated between participants. However, to date, a
variety of gradient leakage attacks have been proposed against this procedure,
proving that it …
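The leakage the abstract alludes to can be illustrated with a toy example. The model, weights, and data below are illustrative assumptions, not taken from the paper: for a single linear layer with a bias trained on squared loss, the shared gradients alone are enough for a server to reconstruct a client's private input exactly, since the weight gradient is the input scaled by the bias gradient.

```python
# Hedged sketch: a toy linear model y = w.x + b with squared loss on one
# private example (x, t). All values here are made-up for illustration.

def client_gradients(w, b, x, t):
    """Gradients of (w.x + b - t)^2 w.r.t. w and b, as a client would share."""
    e = sum(wi * xi for wi, xi in zip(w, x)) + b - t  # residual
    g_w = [2 * e * xi for xi in x]                    # 2e * x
    g_b = 2 * e
    return g_w, g_b

def reconstruct_input(g_w, g_b):
    """Since g_w = 2e*x and g_b = 2e, the private input leaks as g_w / g_b."""
    if abs(g_b) < 1e-12:
        raise ValueError("zero residual: gradient carries no signal")
    return [gi / g_b for gi in g_w]

# Server side: only gradients are observed, yet x_secret is recovered.
w, b = [0.5, -0.2, 0.1], 0.3
x_secret, t_secret = [1.0, 2.0, -1.0], 0.7
g_w, g_b = client_gradients(w, b, x_secret, t_secret)
x_rec = reconstruct_input(g_w, g_b)  # equals x_secret exactly
```

Deeper networks do not admit such a closed-form inversion; attacks in the "deep leakage from gradients" line instead optimize a dummy input so its gradients match the observed ones, which this toy case reduces to a single division.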
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning
1 day, 6 hours ago | arxiv.org
Transferability Ranking of Adversarial Examples
1 day, 6 hours ago | arxiv.org
A survey on hardware-based malware detection approaches
1 day, 6 hours ago | arxiv.org
Explainable Ponzi Schemes Detection on Ethereum
1 day, 6 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Security Officer Level 1 (L1)
@ NTT DATA | Virginia, United States of America
Work-Study Program - VOC Analyst - Cybersecurity - Île-de-France
@ Sopra Steria | Courbevoie, France
Senior Security Researcher, SIEM
@ Huntress | Remote US or Remote CAN
Cyber Security Engineer Lead
@ ASSYSTEM | Bridgwater, United Kingdom