Sept. 16, 2022, 1:20 a.m. | Andrew Lowy, Meisam Razaviyayn

cs.CR updates on arXiv.org

We study differentially private (DP) stochastic optimization (SO) with data containing outliers and loss functions that are not Lipschitz continuous. To date, the vast majority of work on DP SO assumes that the loss is Lipschitz (i.e., stochastic gradients are uniformly bounded), with error bounds that scale with the Lipschitz parameter of the loss. While this assumption is convenient, it is often unrealistic: in many practical problems where privacy is required, data may contain outliers or be unbounded, causing some …
