FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. (arXiv:2207.09209v3 [cs.CR] UPDATED)
July 28, 2022, 1:20 a.m. | Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
cs.CR updates on arXiv.org arxiv.org
Federated learning (FL) is vulnerable to model poisoning attacks, in which
malicious clients corrupt the global model by sending manipulated model
updates to the server. Existing defenses mainly rely on Byzantine-robust FL
methods, which aim to learn an accurate global model even if some clients are
malicious. In practice, however, they can resist only a small number of
malicious clients. Defending against model poisoning attacks mounted by a
large number of malicious clients remains an open challenge. …
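To see why plain averaging is fragile, consider the standard FedAvg aggregation the abstract alludes to. The sketch below (not the paper's FLDetector algorithm; all names and values are illustrative) shows how a single malicious client's manipulated update can drag the averaged global model far from the benign consensus:

```python
# Minimal sketch of federated averaging with one malicious client.
# This is NOT the FLDetector defense from the paper, only an
# illustration of the model poisoning threat it addresses.
import numpy as np

def fedavg(updates):
    """Server aggregates client updates by simple coordinate-wise averaging."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)

# Nine benign clients send small updates near the true direction (~[1, 1, 1, 1]).
benign = [np.ones(4) + 0.01 * rng.standard_normal(4) for _ in range(9)]

# One malicious client sends a large update in the opposite direction.
malicious = [-100.0 * np.ones(4)]

clean = fedavg(benign)
poisoned = fedavg(benign + malicious)

print("clean aggregate   :", np.round(clean, 2))     # close to [1, 1, 1, 1]
print("poisoned aggregate:", np.round(poisoned, 2))  # dragged strongly negative
```

With 10 clients, the single poisoned update of magnitude 100 shifts every coordinate of the average by about -10, overwhelming the nine benign contributions — which is why Byzantine-robust aggregation (or detecting and removing malicious clients, as FLDetector proposes) is needed once attackers are numerous or unconstrained.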
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
SOC Cyber Threat Intelligence Expert
@ Amexio | Luxembourg, Luxembourg, Luxembourg
Systems Engineer - SecOps
@ Fortinet | Dubai, Dubai, United Arab Emirates
Cybersecurity Engineer, AMR Project Governance (M/F)
@ ASSYSTEM | Lyon, France
Senior DevSecOps Consultant
@ Computacenter | Birmingham, GB, B37 7YS