Oct. 19, 2022, 2:20 a.m. | Andrew Lowy, Meisam Razaviyayn

cs.CR updates on arXiv.org arxiv.org

We study differentially private (DP) stochastic optimization (SO) with data
containing outliers and loss functions that are (possibly) not Lipschitz
continuous. To date, the vast majority of work on DP SO assumes that the loss
is uniformly Lipschitz over data (i.e., stochastic gradients are uniformly
bounded over all data points). While this assumption is convenient, it is often
unrealistic: in many practical problems, the loss function may not be uniformly
Lipschitz. Even when the loss function is Lipschitz continuous, the …
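
For concreteness, the uniform Lipschitz assumption described above admits a
standard formalization; the following is a sketch, with the symbols (\ell for
the loss, w for the model parameters, x for a data point, \mathcal{X} for the
data domain, L for the Lipschitz constant) chosen for illustration rather than
taken from the paper:

% Uniform Lipschitz assumption over the data (illustrative notation):
\[
  \sup_{x \in \mathcal{X}} \big\| \nabla_w \ell(w, x) \big\|_2 \le L
  \quad \text{for all parameters } w,
\]

i.e., every per-sample stochastic gradient is bounded by the same constant L.
This uniform bound is what gives each data point bounded sensitivity, which DP
mechanisms typically use to calibrate noise; with outliers or non-Lipschitz
losses, as in the setting studied here, L may be large or infinite and that
calibration no longer applies.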
