Oct. 24, 2022, 1:20 a.m. | Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

cs.CR updates on arXiv.org

Machine learning models can leak information about the data used to train
them. To mitigate this issue, Differentially Private (DP) variants of
optimization algorithms like Stochastic Gradient Descent (DP-SGD) have been
designed to trade off utility for privacy in Empirical Risk Minimization (ERM)
problems. In this paper, we propose Differentially Private proximal Coordinate
Descent (DP-CD), a new method to solve composite DP-ERM problems. We derive
utility guarantees through a novel theoretical analysis of inexact coordinate
descent. Our results show that, thanks …

