Web: http://arxiv.org/abs/2209.07403

Sept. 16, 2022, 1:20 a.m. | Andrew Lowy, Meisam Razaviyayn

cs.CR updates on arXiv.org

We study differentially private (DP) stochastic optimization (SO) with data
containing outliers and loss functions that are not Lipschitz continuous. To
date, the vast majority of work on DP SO assumes that the loss is Lipschitz
(i.e., stochastic gradients are uniformly bounded), with error bounds that scale
with the Lipschitz parameter of the loss. While this assumption is convenient,
it is often unrealistic: in many practical problems where privacy is required,
data may contain outliers or be unbounded, causing some …
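
As a minimal, hedged illustration (not the authors' method), the sketch below shows the standard DP-SGD-style recipe that the Lipschitz assumption underwrites: when every per-example gradient has norm at most L, clipping at L is lossless and the Gaussian noise needed for privacy is calibrated to L, which is why error bounds in prior work scale with the Lipschitz parameter. With outliers or unbounded data there is no finite L, and this calibration breaks down. All function names and parameters here are illustrative assumptions.

```python
# Minimal sketch of gradient clipping + Gaussian noise (DP-SGD style), assuming a
# toy linear model with squared loss. Illustrative only; not the paper's algorithm.
import numpy as np

def dp_sgd_step(w, X, y, clip_norm, noise_multiplier, lr, rng):
    """One noisy gradient step; clip_norm plays the role of the Lipschitz bound L."""
    n = X.shape[0]
    # Per-example gradients of 0.5*(x.w - y)^2 are (x.w - y) * x, shape (n, d).
    residuals = X @ w - y
    grads = residuals[:, None] * X
    # Clip each per-example gradient to norm <= clip_norm (uniform bound on gradients).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise scaled to the per-example sensitivity (= clip_norm), average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 5))
y = rng.normal(size=128)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=rng)
```

Note that the noise scale is proportional to clip_norm: if gradients are unbounded (no valid L), either the noise must grow without bound or clipping introduces bias, which is the tension the paper addresses.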

