July 28, 2022, 1:20 a.m. | Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

cs.CR updates on arXiv.org

Federated learning (FL) is vulnerable to model poisoning attacks, in which
malicious clients corrupt the global model by sending manipulated model
updates to the server. Existing defenses mainly rely on Byzantine-robust FL
methods, which aim to learn an accurate global model even if some clients are
malicious. In practice, however, these methods can resist only a small number
of malicious clients. Defending against model poisoning attacks mounted by a
large number of malicious clients remains an open challenge. …
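To make the threat model concrete, here is a minimal sketch (not the paper's method) contrasting plain federated averaging with coordinate-wise median, a classic Byzantine-robust aggregation rule. The client counts, update shapes, and poisoned values are illustrative assumptions chosen to show how a handful of malicious updates can drag the mean while the median holds, and how that protection erodes as malicious clients approach a majority.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the mean of all client updates.
    A single malicious client can shift the mean arbitrarily far."""
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    """A standard Byzantine-robust aggregation rule: take the median of
    each model coordinate across clients. It tolerates some malicious
    clients but breaks down once they approach a majority."""
    return np.median(updates, axis=0)

# Toy setup (illustrative): 10 clients with 5-dimensional updates,
# 4 of them malicious.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(6, 5))  # benign updates near 1.0
poisoned = np.full((4, 5), -100.0)                    # manipulated updates
updates = np.vstack([honest, poisoned])

print("FedAvg:", fedavg(updates))                  # dragged toward -100
print("Median:", coordinate_wise_median(updates))  # stays near 1.0
```

With 4 of 10 clients malicious, the median still tracks the honest updates; flip the split to 6 malicious and the median itself is poisoned, which is the large-majority regime the abstract flags as open.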

Tags: attacks, clients, federated learning, malicious, poisoning
