Web: http://arxiv.org/abs/2209.05724

Sept. 14, 2022, 1:20 a.m. | Jing Wu, Munawar Hayat, Mingyi Zhou, Mehrtash Harandi

cs.CR updates on arXiv.org

Federated Learning (FL) provides a promising distributed learning paradigm,
since it seeks to protect users' privacy by not sharing their private training
data. Recent research has demonstrated, however, that FL is susceptible to
model inversion attacks, which can reconstruct users' private data by
eavesdropping on shared gradients. Existing defense solutions cannot survive
stronger attacks and exhibit a poor trade-off between privacy and performance.
In this paper, we present a straightforward yet effective defense strategy
based on obfuscating the gradients of …
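As a rough illustration of the general idea of gradient obfuscation (not the specific scheme proposed in this paper, whose details are truncated above), a client can perturb its gradient before sharing it with the server, for example by norm clipping plus Gaussian noise as in differentially private SGD; the function name and hyperparameters below are illustrative assumptions:

```python
import math
import random

def obfuscate_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a gradient vector to `clip_norm` and add Gaussian noise.

    Generic gradient-perturbation sketch (in the style of DP-SGD),
    NOT the obfuscation method of the paper above; values are
    illustrative only.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, noise_std) for g in clipped]

# A client would apply this before sending its update to the server:
shared = obfuscate_gradient([3.0, 4.0])  # original L2 norm is 5.0
```

The intuition is that an eavesdropper who reconstructs data from the perturbed gradient recovers a noisier input, at the cost of some model accuracy, which is exactly the privacy/performance trade-off the abstract refers to.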

Tags: defense, federated learning, privacy
