Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks. (arXiv:2401.16687v1 [cs.CR])
cs.CR updates on arXiv.org
Collaborative learning (CL) is a distributed learning framework that aims to
protect user privacy by having users jointly train a model while sharing only
their gradient updates. However, gradient inversion attacks (GIAs), which
recover users' training data from the shared gradients, pose severe privacy
threats to CL. Existing defense methods adopt different techniques, e.g.,
differential privacy, cryptography, and perturbation defenses, to defend
against GIAs. Nevertheless, all current defense methods suffer from a poor
trade-off between privacy, utility, and efficiency. …
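To make the defense setting concrete: gradient pruning, the family of defenses the title revisits, typically zeroes out low-magnitude gradient entries before a client shares its update, so an attacker reconstructing data from the shared gradient works from incomplete information. The sketch below illustrates generic magnitude-based gradient pruning under assumed parameter names (`prune_gradient`, `prune_ratio`); it is not the paper's "dual realization", whose details are not given in this excerpt.

```python
# Illustrative sketch of magnitude-based gradient pruning, a common
# defense against gradient inversion attacks. The function name and
# prune_ratio parameter are assumptions for this example, not the
# paper's actual API or method.
import numpy as np

def prune_gradient(grad: np.ndarray, prune_ratio: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude entries of a gradient tensor.

    prune_ratio is the fraction of entries set to zero; the client
    would then share only the surviving large-magnitude updates.
    """
    flat = grad.ravel()
    k = int(len(flat) * prune_ratio)
    if k == 0:
        return grad.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    pruned = np.where(np.abs(flat) > threshold, flat, 0.0)
    return pruned.reshape(grad.shape)

# Toy example: prune half of a 2x2 gradient before sharing it.
grad = np.array([[0.5, -0.01], [0.2, 0.003]])
shared = prune_gradient(grad, prune_ratio=0.5)
# The two small-magnitude entries (-0.01 and 0.003) are zeroed out.
```

The privacy/utility tension the abstract mentions is visible even here: a higher `prune_ratio` removes more information an attacker could exploit, but also discards more of the signal the global model needs to converge.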