April 28, 2022, 1:20 a.m. | Shaltiel Eloul, Fran Silavong, Sanket Kamthe, Antonios Georgiadis, Sean J. Moran

cs.CR updates on arXiv.org (arxiv.org)

Federated learning reduces the risk of information leakage, but remains
vulnerable to attacks. We investigate how several neural network design
decisions can defend against gradient inversion attacks. We show that
overlapping gradients provide numerical resistance to gradient inversion on
the highly vulnerable dense layer. Specifically, we propose to leverage
batching to maximise the mixing of gradients by choosing an appropriate loss
function and drawing identical labels. We show that otherwise it is possible to
directly recover all vectors in a mini-batch …
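To illustrate why the dense layer is so vulnerable, below is a minimal sketch (not the authors' code; the layer shapes and variable names are illustrative assumptions) of single-sample gradient inversion on a fully connected layer with a bias term. For one example, dL/dW is the outer product of dL/db and the input, so dividing any row of the weight gradient by the matching bias-gradient entry recovers the input exactly; averaging gradients over a mini-batch mixes these outer products, which is the kind of numerical resistance the batching strategy described above relies on.

import numpy as np

# Sketch: a dense layer y = W x + b leaks its input through its gradients.
# For a single sample, dL/dW = (dL/dy) x^T and dL/db = dL/dy, so any row of
# dL/dW divided by the matching entry of dL/db reconstructs x exactly.

rng = np.random.default_rng(0)
x = rng.normal(size=4)              # private input vector (what the attacker wants)
W = rng.normal(size=(3, 4))         # dense-layer weights
b = rng.normal(size=3)              # dense-layer bias

y = W @ x + b                       # forward pass through the dense layer
g_y = rng.normal(size=3)            # some upstream gradient dL/dy

grad_W = np.outer(g_y, x)           # dL/dW, shared with the server in federated learning
grad_b = g_y                        # dL/db, also shared

# Attacker side: pick any output unit with a non-zero bias gradient
# and divide the corresponding weight-gradient row by it.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # True: the input is reconstructed from gradients alone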

Tags: attacks, lg, privacy
