Aug. 10, 2022, 1:20 a.m. | Daniel Scheliga, Patrick Mäder, Marco Seeland

cs.CR updates on arXiv.org

Exploiting gradient leakage to reconstruct supposedly private training data,
gradient inversion attacks are a ubiquitous threat in collaborative learning
of neural networks. To prevent gradient leakage without suffering a severe
loss in model performance, recent work proposed a PRivacy EnhanCing mODulE
(PRECODE) based on variational modeling as an extension for arbitrary model
architectures. In this work, we investigate the effect of PRECODE on gradient
inversion attacks to reveal its underlying working principle. We show that
variational modeling induces stochasticity on PRECODE's …
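To illustrate the mechanism the abstract points to, below is a minimal PyTorch-style sketch of a variational bottleneck module of the kind PRECODE describes: features are encoded into a mean and log-variance, a latent sample is drawn via the reparameterization trick, and the sample is decoded back to feature space, so the gradients shared in collaborative training depend on fresh random noise at every forward pass. The module name, layer sizes, latent dimension, and placement before the output layer are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VariationalBottleneck(nn.Module):
    """Sketch of a PRECODE-style variational bottleneck (hypothetical
    naming/sizing). Encodes features into a latent distribution and
    decodes a stochastic sample, making activations, and hence the
    gradients derived from them, stochastic."""

    def __init__(self, feature_dim: int, latent_dim: int = 256):
        super().__init__()
        self.to_mu = nn.Linear(feature_dim, latent_dim)
        self.to_logvar = nn.Linear(feature_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.to_mu(x)
        logvar = self.to_logvar(x)
        # Reparameterization trick: fresh noise on every forward pass
        # injects stochasticity into the shared gradients.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.decode(z)

# Example: inserting the bottleneck before a classifier's output layer
# (an arbitrary host architecture, per the paper's framing).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    VariationalBottleneck(512),
    nn.Linear(512, 10),
)
```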
