June 9, 2023, 1:10 a.m. | Hao Yu, Chuan Ma, Meng Liu, Xinwang Liu, Zhe Liu, Ming Ding

cs.CR updates on arXiv.org arxiv.org

As a collaborative paradigm, Federated Learning (FL) empowers clients to
engage in collective model training without exchanging their respective local
data. Nevertheless, FL remains vulnerable to backdoor attacks, in which an
attacker compromises a subset of clients and injects poisoned model weights
into the aggregation process to yield attacker-chosen predictions for
particular samples. Existing countermeasures, mainly based on anomaly
detection, may erroneously reject legitimate weights while accepting malicious
ones, because they quantify client model similarities inadequately. Other
defense mechanisms …
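To make the similarity-based defense concrete, here is a minimal illustrative sketch (not the paper's method): each client's flattened update is scored by its mean cosine similarity to the other updates, updates scoring below a threshold are dropped as suspected poisoned weights, and the survivors are averaged FedAvg-style. The function names, the threshold, and the toy vectors are all hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened weight-update vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_and_aggregate(updates, threshold=0.0):
    """Score each client update by its mean cosine similarity to all other
    updates, drop updates scoring at or below `threshold` (suspected
    poisoned), and average the rest (FedAvg-style aggregation)."""
    n = len(updates)
    scores = []
    for i in range(n):
        sims = [cosine_sim(updates[i], updates[j]) for j in range(n) if j != i]
        scores.append(sum(sims) / len(sims))
    kept = [u for u, s in zip(updates, scores) if s > threshold]
    if not kept:  # fall back to plain averaging if everything was rejected
        kept = updates
    return np.mean(kept, axis=0), scores

# Toy example: three roughly aligned benign updates and one inverted
# (poisoned) update, which receives a strongly negative similarity score.
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]),
           np.array([1.1, 0.9]), np.array([-1.0, -1.0])]
aggregate, scores = filter_and_aggregate(updates)
```

As the abstract notes, this kind of raw similarity score is exactly where such defenses can fail: a benign client with unusual local data may score low and be rejected, while a stealthy attacker who keeps the poisoned update directionally close to the benign ones passes the filter.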

