SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks. (arXiv:2309.10607v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
While federated learning (FL) is attractive for privacy-preserving training on
distributed data, the credibility of participating clients and the
non-inspectability of their data pose new security threats, among which poisoning
attacks are particularly rampant and hard to defend against without compromising
privacy, performance, or other desirable properties of FL. To tackle this
problem, we propose a self-purified FL (SPFL) method that enables benign clients
to exploit trusted historical features of the locally purified model to
supervise the training of the aggregated model in each iteration. The …
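The abstract is truncated here, but its core idea — a benign client distilling trusted historical features of its locally purified model into the aggregated model it receives each round — can be sketched roughly as follows. All names, the feature-MSE distillation term, and the weighting scheme are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def self_purification_loss(task_loss, agg_features, purified_features, alpha=0.5):
    """Hypothetical per-client training objective for the aggregated model.

    task_loss         -- ordinary supervised loss on the client's local data
    agg_features      -- intermediate features from the aggregated (global) model
    purified_features -- trusted historical features from the locally purified model
    alpha             -- assumed weight balancing task loss vs. distillation

    The distillation term pulls the aggregated model's features toward the
    trusted local ones, counteracting poisoned updates folded in by aggregation.
    """
    distill = np.mean((np.asarray(agg_features) - np.asarray(purified_features)) ** 2)
    return (1.0 - alpha) * task_loss + alpha * distill
```

In a full FL loop, each benign client would minimize this combined loss locally before sending its update back to the server, so that self-distillation acts as a per-round purification step rather than a separate detection phase.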