Oct. 9, 2023, 1:10 a.m. | Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Yuhang Yao, Qifan Zhang, Salman Avestimehr, Chaoyang He

cs.CR updates on arXiv.org

Federated learning (FL) systems are vulnerable to malicious clients that
submit poisoned local models to achieve adversarial goals, such as preventing
the global model from converging or inducing it to misclassify certain data.
Many existing defense mechanisms are impractical in real-world FL systems, as
they require prior knowledge of the number of malicious clients or rely on
re-weighting or modifying client submissions. This is because adversaries
typically do not announce their intentions before attacking, and re-weighting
might …
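To make the threat model concrete, below is a minimal sketch (not the paper's method) of the poisoning setting the abstract describes: plain FedAvg aggregation is contrasted with coordinate-wise median, a classic robust aggregator that, unlike the defenses the authors critique, needs no prior knowledge of how many clients are malicious. The flat-vector update model and the scaled-negation attack are illustrative assumptions.

```python
# Hedged toy illustration: client updates are flat NumPy vectors; one poisoned
# client submits a negated, amplified update to stall convergence. This is a
# generic robust-aggregation sketch, NOT the defense proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def fedavg(updates):
    """Plain FedAvg: unweighted mean of client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of outlier updates,
    with no assumption about the number of malicious clients."""
    return np.median(updates, axis=0)

# Nine honest clients push in roughly the same direction; one attacker
# negates and amplifies its update (a simple model-poisoning strategy).
honest = [np.ones(4) + 0.1 * rng.standard_normal(4) for _ in range(9)]
poisoned = [-50.0 * np.ones(4)]
updates = np.stack(honest + poisoned)

print("FedAvg :", fedavg(updates))             # dragged far from the honest mean
print("Median :", coordinate_median(updates))  # stays near ~1.0 per coordinate
```

Running the sketch, the mean is pulled toward the attacker's large negative update while the median stays close to the honest clients' consensus, which is why median-style aggregators are a common baseline against the poisoning attacks the abstract discusses.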

