Jan. 24, 2023, 2:10 a.m. | Ali Hatamizadeh, Hongxu Yin, Pavlo Molchanov, Andriy Myronenko, Wenqi Li, Prerna Dogra, Andrew Feng, Mona G. Flores, Jan Kautz, Daguang Xu, Holger R.

cs.CR updates on arXiv.org

Federated learning (FL) allows the collaborative training of AI models
without the need to share raw data. This capability makes it especially
interesting for healthcare applications, where patient and data privacy are of
utmost concern. However, recent work on the inversion of deep neural networks
from model gradients has raised concerns about the ability of FL to prevent the
leakage of training data. In this work, we show that these attacks presented in
the literature are impractical in FL use-cases where the …
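For context, below is a minimal sketch of the kind of gradient inversion attack the abstract refers to, in the spirit of "Deep Leakage from Gradients" (Zhu et al., 2019): an attacker who observes a client's shared gradients optimizes a dummy input (and dummy label) until the gradients it produces match the observed ones. The toy model, dimensions, and variable names are illustrative assumptions, not the setup evaluated in this paper.

```python
# Sketch of a gradient inversion ("deep leakage from gradients") attack.
# Illustrative only; not the paper's actual models or attack configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network standing in for a client's model in FL.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
loss_fn = nn.CrossEntropyLoss()

# --- Client side: compute gradients on a private batch. ---
x_private = torch.randn(1, 8)
y_private = torch.tensor([1])
loss = loss_fn(model(x_private), y_private)
true_grads = torch.autograd.grad(loss, model.parameters())

# --- Attacker side: given only model weights and the shared gradients,
# optimize dummy data so its gradients match the observed ones. ---
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)  # soft label, also optimized
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy against the (optimized) soft label.
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, -1) * -torch.log_softmax(model(x_dummy), -1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Distance between dummy gradients and the client's true gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("reconstruction error:", (x_dummy - x_private).norm().item())
```

Whether such an attack is practical in a real FL deployment depends on factors like batch size, model architecture, and how gradients are aggregated across local iterations before being shared, which is what this work examines.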
