Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks. (arXiv:2209.04030v1 [cs.CR])
Sept. 12, 2022, 1:20 a.m. | Chulin Xie, Yunhui Long, Pin-Yu Chen, Bo Li
cs.CR updates on arXiv.org arxiv.org
Federated learning (FL) provides an efficient paradigm to jointly train a
global model leveraging data from distributed users. As the local training data
come from different users who may not be trustworthy, several studies have
shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the
privacy of local users, FL is often trained in a differentially private way
(DPFL). Thus, in this paper, we ask: Can we leverage the innate privacy
property of DPFL to provide certified robustness …
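The abstract describes DPFL: federated learning where client updates are made differentially private before aggregation. A common way to achieve user-level DP in FL (DP-FedAvg style, shown here as an illustrative sketch, not necessarily the construction used in this paper) is to clip each client's model update to a fixed L2 norm and add Gaussian noise calibrated to that clipping bound. The function name `dp_fedavg` and all parameter choices below are assumptions for illustration:

```python
import numpy as np

def dp_fedavg(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate client updates with per-client L2 clipping and Gaussian
    noise (illustrative DP-FedAvg-style sketch, user-level DP)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Project each update into the L2 ball of radius clip_norm,
        # bounding any single user's influence on the aggregate.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    # Noise scale is calibrated to the clipped per-user sensitivity
    # (clip_norm / n); noise_multiplier controls the privacy level.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Because clipping caps how far any single (possibly poisoned) client update can move the global model, the same mechanism that yields the DP guarantee also limits the influence of malicious users, which is the intuition behind connecting DP to certified robustness against poisoning.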
Tags: attacks, certified, differential privacy, federated learning, poisoning, privacy, robustness