FLDetector: Detecting Malicious Clients in Model Poisoning Attacks to Federated Learning. (arXiv:2207.09209v1 [cs.CR])
July 20, 2022, 1:20 a.m. | Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
cs.CR updates on arXiv.org
Federated learning (FL) is vulnerable to model poisoning attacks, in which
malicious clients corrupt the global model by sending manipulated model
updates to the server. Existing defenses mainly rely on Byzantine-robust FL
methods, which aim to learn an accurate global model even if some clients are
malicious. In practice, however, these methods can resist only a small number
of malicious clients. Defending against model poisoning attacks mounted by a
large number of malicious clients remains an open challenge. …
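To make the attack-and-defense setting above concrete, here is a minimal sketch (not the paper's FLDetector method) contrasting plain federated averaging with coordinate-wise median, a classic Byzantine-robust aggregation rule. All client updates and values are made up for illustration.

```python
from statistics import median

# Hypothetical toy updates: three honest clients send similar gradients,
# one malicious client sends a manipulated, extreme update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
malicious = [100.0, -100.0]
updates = honest + [malicious]
dim = len(malicious)

# Plain federated averaging: a single attacker can drag the global
# update arbitrarily far from the honest consensus.
mean_agg = [sum(u[i] for u in updates) / len(updates) for i in range(dim)]

# Coordinate-wise median: each coordinate's outlier has bounded influence,
# so the aggregate stays near the honest updates.
median_agg = [median(u[i] for u in updates) for i in range(dim)]

print("mean:  ", mean_agg)    # pulled far off by the malicious update
print("median:", median_agg)  # stays close to [1.0, 1.0]
```

This illustrates why Byzantine-robust rules tolerate only a minority of attackers: once malicious clients control the median in enough coordinates, the robust aggregate is poisoned too, which motivates detecting and removing malicious clients instead.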