Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach
April 18, 2024, 4:11 a.m. | Xinwei Zhang, Zhiqi Bu, Zhiwei Steven Wu, Mingyi Hong
cs.CR updates on arXiv.org
Abstract: Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) is a powerful tool for training deep learning models using sensitive data, providing both a solid theoretical privacy guarantee and high efficiency. However, using DPSGD-GC to ensure Differential Privacy (DP) comes at the cost of model performance degradation due to DP noise injection and gradient clipping. Existing research has extensively analyzed the theoretical convergence of DPSGD-GC, and has shown that it only converges when using large …
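To make the mechanism the abstract refers to concrete, here is a minimal sketch of one standard DPSGD-GC update step: each per-sample gradient is clipped to a fixed L2 norm, the clipped gradients are averaged, and Gaussian noise calibrated to the clipping threshold is added. This is the baseline the paper analyzes, not the paper's error-feedback variant; the function name, signature, and parameters below are illustrative assumptions, not from the paper.

```python
import numpy as np

def dpsgd_gc_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DPSGD-GC update direction (illustrative sketch).

    per_sample_grads : (n, d) array, one gradient row per example
    clip_norm        : L2 clipping threshold C
    noise_multiplier : sigma; noise std is sigma * C / n for the mean
    rng              : numpy Generator for the Gaussian noise
    """
    n, d = per_sample_grads.shape
    clipped = np.empty_like(per_sample_grads)
    for i, g in enumerate(per_sample_grads):
        # Scale each gradient down so its L2 norm is at most clip_norm.
        norm = np.linalg.norm(g)
        clipped[i] = g * min(1.0, clip_norm / max(norm, 1e-12))
    # The mean of clipped gradients has sensitivity clip_norm / n,
    # so Gaussian noise with std noise_multiplier * clip_norm / n
    # yields the usual (epsilon, delta)-DP accounting per step.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=d)
    return clipped.mean(axis=0) + noise
```

The clipping bias the paper targets is visible here: when gradients routinely exceed `clip_norm`, the clipped mean is no longer an unbiased estimate of the true mean gradient, which is why the paper proposes an error-feedback correction instead.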