LearnDefend: Learning to Defend against Targeted Model-Poisoning Attacks on Federated Learning. (arXiv:2305.02022v1 [cs.LG])
cs.CR updates on arXiv.org
Targeted model-poisoning attacks pose a significant threat to federated learning systems. Recent studies show that edge-case targeted attacks, which target a small fraction of the input space, are nearly impossible to counter using existing fixed defense strategies. In this paper, we strive to design a learned-defense strategy against such attacks, using a small defense dataset. The defense dataset can be collected by the central authority of the federated learning task, and should contain a mix of poisoned and clean …
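The abstract is truncated, but the general idea of a defense-dataset-based strategy can be sketched: the server scores each client's update against a small held-out defense dataset and down-weights updates that hurt performance on it. The sketch below is an illustrative simplification under assumed names (`loss`, `score_updates`, `defended_aggregate`, a toy linear model), not LearnDefend's actual learned defense:

```python
import numpy as np

def loss(w, X, y):
    # Squared error of a toy linear model y ~ X @ w.
    return float(np.mean((X @ w - y) ** 2))

def score_updates(w_global, updates, X_def, y_def):
    # Score each candidate update by the defense-set loss after applying it;
    # lower loss -> higher aggregation weight (softmax over negated losses).
    losses = np.array([loss(w_global + u, X_def, y_def) for u in updates])
    z = -losses
    z -= z.max()          # numerical stability
    p = np.exp(z)
    return p / p.sum()

def defended_aggregate(w_global, updates, X_def, y_def):
    # Weighted average of client updates, with weights from the defense set,
    # instead of a fixed rule such as plain FedAvg.
    weights = score_updates(w_global, updates, X_def, y_def)
    new_w = w_global + sum(wt * u for wt, u in zip(weights, updates))
    return new_w, weights
```

In a toy run with several honest clients pushing the model toward the true weights and one poisoned client pushing in the opposite direction, the poisoned update receives a much smaller aggregation weight, because applying it raises the loss on the defense dataset.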