April 18, 2024, 4:11 a.m. | Qiang Li, Michal Yemini, Hoi-To Wai

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2404.10995v1 Announce Type: cross
Abstract: Clipped stochastic gradient descent (SGD) algorithms are among the most popular algorithms for privacy-preserving optimization, reducing the leakage of users' identities during model training. This paper studies the convergence properties of these algorithms in a performative prediction setting, where the data distribution may shift due to the deployed prediction model. Such shifts arise, for example, from strategic users during the training of a loan policy for banks. Our contributions are two-fold. First, we …
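The truncated abstract does not spell out the paper's exact algorithm, but the setting it describes combines two standard ingredients: a clipped (and typically noised, DP-SGD-style) gradient update, and data drawn from a distribution that depends on the currently deployed model. Below is a minimal Python sketch of that combination under illustrative assumptions; the `sample_data` shift model, step size, clipping threshold `c`, and noise scale `sigma` are all hypothetical choices for demonstration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, c):
    """Project gradient g onto the l2-ball of radius c."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def sample_data(theta, n=32, shift=0.5):
    """Toy performative distribution: the data depends on the deployed
    model theta (hypothetical shift model, for illustration only)."""
    x = rng.normal(0.0, 1.0, size=(n, theta.size)) + shift * theta
    y = x @ np.ones(theta.size) + rng.normal(0.0, 0.1, size=n)
    return x, y

def clipped_sgd(theta0, steps=200, eta=0.05, c=1.0, sigma=0.1):
    """Greedy-deploy clipped SGD: each minibatch is drawn from the
    distribution induced by the current model (performative prediction);
    the stochastic gradient is clipped, and Gaussian noise of scale
    sigma mimics a DP-SGD-style privacy mechanism."""
    theta = theta0.copy()
    for _ in range(steps):
        x, y = sample_data(theta)
        resid = x @ theta - y              # least-squares residuals
        g = x.T @ resid / len(y)           # stochastic gradient
        theta -= eta * (clip(g, c) + sigma * rng.normal(size=theta.shape))
    return theta

theta_hat = clipped_sgd(np.zeros(5))
print(theta_hat)
```

Note the feedback loop this sketch makes explicit: because `sample_data` depends on `theta`, the iterates do not minimize a fixed objective, which is precisely why convergence in the performative setting requires separate analysis.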
