April 18, 2024, 4:11 a.m. | Xinwei Zhang, Zhiqi Bu, Zhiwei Steven Wu, Mingyi Hong

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2311.14632v2 Announce Type: replace-cross
Abstract: Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) is a powerful tool for training deep learning models using sensitive data, providing both a solid theoretical privacy guarantee and high efficiency. However, using DPSGD-GC to ensure Differential Privacy (DP) comes at the cost of model performance degradation due to DP noise injection and gradient clipping. Existing research has extensively analyzed the theoretical convergence of DPSGD-GC, and has shown that it only converges when using large …
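To make the mechanism concrete, here is a minimal sketch of one DPSGD-GC step on a toy linear model: per-example gradients are clipped to an L2 threshold, averaged, and perturbed with Gaussian noise calibrated to the clipping threshold. The function name, the squared-error loss, and the hyperparameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dpsgd_gc_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD step with per-example gradient clipping (DPSGD-GC) on a
    linear model with squared-error loss (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                      # shape (n,)
    grads = residuals[:, None] * X             # shape (n, d)
    # Clip each per-example gradient to L2 norm at most C.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Average the clipped gradients and inject Gaussian DP noise
    # scaled to the clipping threshold C.
    noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=w.shape)
    return w - lr * noisy_grad

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=64)
w = np.zeros(5)
for _ in range(100):
    w = dpsgd_gc_step(w, X, y, rng=rng)
```

Both the clipping bias and the injected noise visible in this sketch are the sources of the performance degradation the abstract refers to.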

