July 21, 2022, 1:20 a.m. | Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

cs.CR updates on arXiv.org

Federated learning (FL) is vulnerable to model poisoning attacks, in which
malicious clients corrupt the global model by sending manipulated model
updates to the server. Existing defenses mainly rely on Byzantine-robust FL
methods, which aim to learn an accurate global model even if some clients are
malicious. However, they can only resist a small number of malicious clients in
practice. How to defend against model poisoning attacks with a large number of
malicious clients remains an open challenge. …
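To make the setting concrete, below is a minimal, illustrative sketch (not taken from the paper) contrasting plain averaging of client model updates with one common Byzantine-robust aggregation rule, the coordinate-wise median. The number of clients, the update values, and the attack model are assumptions chosen only to show how a few manipulated updates can drag a plain average while a robust rule resists them.

```python
# Illustrative sketch only: plain FedAvg-style averaging vs. a Byzantine-robust
# rule (coordinate-wise median). Client counts and update values are assumed.
import numpy as np

def average_updates(updates):
    """Plain averaging: a few large malicious updates can shift the result."""
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    """A common Byzantine-robust rule: take the per-parameter median."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.1, scale=0.01, size=(8, 5))  # 8 honest clients' updates
malicious = np.full((2, 5), -10.0)                      # 2 manipulated updates
updates = np.vstack([benign, malicious])

print("mean  :", average_updates(updates))          # pulled toward the attack
print("median:", coordinate_wise_median(updates))   # stays near benign updates
```

As the abstract notes, such robust rules tolerate only a small fraction of malicious clients; once attackers form a large share of participants, the median itself can be controlled, which is the open problem the paper targets.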

Tags: attacks, clients, federated learning, malicious, poisoning
