Nov. 24, 2022, 2:10 a.m. | Daniel Scheliga, Patrick Mäder, Marco Seeland

cs.CR updates on arXiv.org

Gradient inversion attacks on federated learning systems reconstruct client
training data from exchanged gradient information. To defend against such
attacks, a variety of defense mechanisms have been proposed. However, they
usually lead to an unacceptable trade-off between privacy and model utility.
Recent observations suggest that dropout can mitigate gradient leakage and
improve model utility when added to neural networks. Unfortunately, this
phenomenon has not yet been systematically researched. In this work, we
thoroughly analyze the effect of dropout on iterative gradient …
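
The gradient-matching idea behind such attacks is compact enough to sketch. Below is a minimal PyTorch illustration of a DLG-style iterative gradient inversion: the attacker optimizes dummy inputs and soft labels so that the gradients they induce match the gradient a client shared. This is a sketch under illustrative assumptions, not the paper's method; the architecture, dimensions, learning rate, and step count are all invented for the demo. The nn.Dropout layer is included because random dropout masks are exactly what the abstract suggests may disturb this matching.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model; architecture and sizes are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # the defense under study: random masks perturb gradients
    nn.Linear(64, 10),
)
criterion = nn.CrossEntropyLoss()

# The client's private batch and the gradient it would share in federated learning.
x_true = torch.randn(4, 32)
y_true = torch.randint(0, 10, (4,))
model.train()  # dropout active, as during local training
shared_grads = torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters()
)
shared_grads = [g.detach() for g in shared_grads]

# Attacker: iteratively optimize dummy data and soft labels so that the
# gradients they induce match the shared gradients.
x_dummy = torch.randn(4, 32, requires_grad=True)
y_dummy = torch.randn(4, 10, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(300):
    optimizer.zero_grad()
    loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(
        loss, model.parameters(), create_graph=True
    )
    # Gradient-matching objective: squared L2 distance between gradient sets.
    grad_diff = sum(
        ((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads)
    )
    grad_diff.backward()
    optimizer.step()

print("input reconstruction error:", (x_dummy - x_true).norm().item())
```

Because model.train() leaves dropout active, every forward pass samples a fresh mask, so the attacker matches against a gradient produced under a mask it cannot observe. Switching the model to eval() removes that noise and typically lowers the reconstruction error, which is one concrete way to probe the mitigation effect the abstract describes.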
