Do Gradient Inversion Attacks Make Federated Learning Unsafe? (arXiv:2202.06924v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Federated learning (FL) allows the collaborative training of AI models
without needing to share raw data. This capability makes it especially
interesting for healthcare applications where patient and data privacy is of
utmost concern. However, recent work on inverting deep neural networks
from their model gradients has raised concerns about whether FL actually prevents the
leakage of training data. In this work, we show that the attacks presented in
the literature are impractical in FL use-cases where the …
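To make the threat concrete, here is a minimal sketch (not the paper's method) of why shared gradients can leak training data. For a single sample passing through a fully-connected layer, the gradient with respect to the weights is an outer product of the output gradient and the input, so the input can be read off directly from the gradients a client would share:

```python
import numpy as np

# Hedged sketch of gradient leakage: for one training sample through a
# fully-connected layer out = W @ x + b, we have dL/dW = outer(dL/db, x),
# so x = (dL/dW)[i] / (dL/db)[i] for any unit i with a nonzero bias gradient.

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # private training input (what FL should protect)
W = rng.normal(size=(3, 4))     # layer weights (known to the server)
b = rng.normal(size=3)          # layer bias

out = W @ x + b
target = np.zeros(3)
# Mean-squared-error loss; these gradients are what a client would share.
grad_out = 2 * (out - target)   # dL/d(out)
grad_W = np.outer(grad_out, x)  # dL/dW = outer(dL/db, x)
grad_b = grad_out               # dL/db

# An honest-but-curious server reconstructs the input from gradients alone.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]
print(np.allclose(x_recovered, x))
```

This exact analytic recovery only applies to a single sample and a linear layer; the attacks the paper evaluates instead optimize a dummy input so its gradients match the shared ones, which is where the practicality questions arise.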