June 16, 2022, 1:20 a.m. | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

cs.CR updates on arXiv.org (arxiv.org)

Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training for deep learning models. The choice of the clipping norm $R$, however, is shown to be vital for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping, that eliminates the need to tune $R$ for any DP optimizer, including DP-SGD, DP-Adam, DP-LAMB, and many others. The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters …
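The contrast the abstract draws is easy to sketch: standard DP training rescales each per-example gradient by $\min(1, R/\|g_i\|)$, which requires tuning $R$, whereas automatic clipping normalizes each gradient so its sensitivity is bounded without any $R$. Below is a minimal illustrative sketch of both steps in PyTorch; the function names, the stability constant `gamma`, and the toy tensor shapes are assumptions for demonstration, not the authors' implementation.

```python
import torch

def clipped_grad_sum(per_example_grads: torch.Tensor, R: float) -> torch.Tensor:
    """Standard per-example clipping: scale each g_i by min(1, R / ||g_i||)."""
    norms = per_example_grads.flatten(1).norm(dim=1)       # ||g_i|| for each example
    scale = torch.clamp(R / (norms + 1e-12), max=1.0)      # min(1, R / ||g_i||)
    return (per_example_grads * scale.view(-1, 1)).sum(0)

def auto_clipped_grad_sum(per_example_grads: torch.Tensor,
                          gamma: float = 0.01) -> torch.Tensor:
    """Automatic clipping: normalize each g_i to g_i / (||g_i|| + gamma).

    Each summand then has norm strictly below 1, so the per-example
    sensitivity is bounded by 1 with no clipping norm R to tune.
    """
    norms = per_example_grads.flatten(1).norm(dim=1)
    return (per_example_grads / (norms + gamma).view(-1, 1)).sum(0)

# Toy usage: a batch of 32 per-example gradients of dimension 10.
g = torch.randn(32, 10)
sigma = 1.0                                    # noise multiplier (illustrative)
noisy_sum = auto_clipped_grad_sum(g) + sigma * torch.randn(10)
```

The sketch assumes per-example gradients are already materialized as one tensor; in practice they come from a per-sample gradient engine, and the noisy sum would be averaged and fed to the chosen optimizer (SGD, Adam, LAMB, etc.).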

