G²uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering. (arXiv:2306.04984v1 [cs.CR])
cs.CR updates on arXiv.org
As a collaborative paradigm, Federated Learning (FL) empowers clients to
engage in collective model training without exchanging their respective local
data. Nevertheless, FL remains vulnerable to backdoor attacks, in which an
attacker compromises a subset of clients and injects poisoned model weights into
the aggregation process to yield attacker-chosen predictions on particular
samples. Existing countermeasures, mainly based on anomaly detection, may
erroneously reject legitimate weights while accepting malicious ones, because
they quantify the similarity between client models inadequately. Other defense
mechanisms …
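The attack the abstract describes can be made concrete with a minimal sketch (not the paper's method, and all client numbers and values are hypothetical): under plain federated averaging, the server takes the coordinate-wise mean of client updates, so a single compromised client submitting a scaled, backdoored update shifts the aggregate unless a defense filters it out.

```python
# Illustrative sketch: plain federated averaging (FedAvg-style mean)
# blindly incorporating a poisoned client update. All numbers are
# hypothetical; this is not G²uardFL's defense, only the threat model.

def fed_avg(updates):
    """Coordinate-wise mean of client weight-update vectors."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

# Nine honest clients report small, similar updates...
honest = [[0.1, -0.2, 0.05] for _ in range(9)]
# ...while one compromised client submits a scaled, backdoored update.
poisoned = [[5.0, 4.0, -3.0]]

clean = fed_avg(honest)
attacked = fed_avg(honest + poisoned)
print("clean aggregate:   ", clean)
print("attacked aggregate:", attacked)
```

A single malicious vector moves every coordinate of the aggregate noticeably, which is why defenses try to separate malicious from benign updates before averaging; the abstract's point is that similarity-based anomaly detection can misclassify in both directions.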