Oct. 6, 2023, 1:11 a.m. | Alex Bie, Gautam Kamath, Guojun Zhang

cs.CR updates on arXiv.org

We show that the canonical approach for training differentially private GANs
-- updating the discriminator with differentially private stochastic gradient
descent (DPSGD) -- can yield significantly improved results after modifications
to training. Specifically, we propose that existing instantiations of this
approach neglect to consider how adding noise only to discriminator updates
inhibits discriminator training, disrupting the balance between the generator
and discriminator necessary for successful GAN training. We show that a simple
fix -- taking more discriminator steps between generator …
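The abstract sketches the recipe: apply DPSGD (per-example gradient clipping plus Gaussian noise) to the discriminator only, and rebalance training by taking several discriminator steps for each generator step. Below is a minimal PyTorch sketch of that setup, not the authors' code; it assumes a discriminator that maps a batch to (batch, 1) logits, and names such as n_disc_steps, clip_norm, and noise_multiplier are illustrative placeholders rather than values from the paper.

```python
import torch
import torch.nn as nn


def dpsgd_discriminator_step(D, G, opt_D, real_batch, latent_dim,
                             clip_norm, noise_multiplier, device):
    """One DPSGD discriminator update: per-example gradient clipping,
    then Gaussian noise added to the summed, clipped gradients."""
    bce = nn.BCEWithLogitsLoss()
    params = list(D.parameters())
    summed = [torch.zeros_like(p) for p in params]
    for x in real_batch:                                  # per-example gradients
        x = x.unsqueeze(0).to(device)
        z = torch.randn(1, latent_dim, device=device)
        fake = G(z).detach()
        loss = (bce(D(x), torch.ones(1, 1, device=device)) +
                bce(D(fake), torch.zeros(1, 1, device=device)))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                             # clip to clip_norm
    n = len(real_batch)
    opt_D.zero_grad()
    for p, s in zip(params, summed):
        noise = noise_multiplier * clip_norm * torch.randn_like(s)
        p.grad = (s + noise) / n                          # noisy averaged gradient
    opt_D.step()


def generator_step(D, G, opt_G, batch_size, latent_dim, device):
    """Ordinary (non-private) generator update: the generator only sees the
    discriminator's output, never the real data, so it gets no extra noise."""
    bce = nn.BCEWithLogitsLoss()
    opt_G.zero_grad()
    z = torch.randn(batch_size, latent_dim, device=device)
    loss = bce(D(G(z)), torch.ones(batch_size, 1, device=device))
    loss.backward()
    opt_G.step()


def train_epoch(D, G, opt_D, opt_G, loader, latent_dim,
                n_disc_steps=5, clip_norm=1.0, noise_multiplier=1.0,
                device="cpu"):
    """Take n_disc_steps noisy discriminator updates per generator update --
    the rebalancing fix the abstract alludes to."""
    it = iter(loader)
    while True:
        try:
            for _ in range(n_disc_steps):
                real_batch, _ = next(it)                  # assumes (data, label) batches
                dpsgd_discriminator_step(D, G, opt_D, real_batch, latent_dim,
                                         clip_norm, noise_multiplier, device)
            generator_step(D, G, opt_G, real_batch.size(0), latent_dim, device)
        except StopIteration:
            break
```

In this sketch only the discriminator updates touch real examples, so only they are clipped and noised; the overall privacy cost would be tracked across discriminator steps with a privacy accountant, which the sketch omits.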
