Oct. 24, 2022, 1:20 a.m. | Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

cs.CR updates on arXiv.org arxiv.org

In this paper, we study differentially private empirical risk minimization
(DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades
polynomially as the dimension increases, which is a major obstacle to privately
training large machine learning models. In high dimension, it is common for
some of a model's parameters to carry more information than others. To exploit
this, we propose a differentially private greedy coordinate descent (DP-GCD)
algorithm. At each iteration, DP-GCD privately performs a coordinate-wise
gradient step along the …
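The abstract is truncated, but the core idea it describes — privately selecting one coordinate per iteration and updating only that coordinate — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact method: the selection rule (report-noisy-max on gradient magnitudes), the Laplace noise, and all parameter names here are assumptions.

```python
import numpy as np

def dp_gcd(grad_fn, w0, n_iters, step_size, noise_scale, rng=None):
    """Sketch of differentially private greedy coordinate descent.

    Hypothetical interface: grad_fn(w) returns the full gradient at w;
    noise_scale sets the Laplace noise used both to pick the coordinate
    (report-noisy-max) and to perturb the gradient step. A real DP
    implementation would also clip per-example gradients and calibrate
    the noise to a privacy budget, which is omitted here.
    """
    rng = np.random.default_rng(rng)
    w = w0.astype(float).copy()
    for _ in range(n_iters):
        g = grad_fn(w)
        # Greedy step: privately select the coordinate with the largest
        # gradient magnitude via report-noisy-max.
        noisy_mag = np.abs(g) + rng.laplace(scale=noise_scale, size=g.shape)
        j = int(np.argmax(noisy_mag))
        # Coordinate-wise gradient step, with fresh noise on the entry used.
        w[j] -= step_size * (g[j] + rng.laplace(scale=noise_scale))
    return w

# Usage: minimize the quadratic 0.5 * ||w - t||^2, whose gradient is w - t.
t = np.array([3.0, -1.0, 0.0, 0.5])
w = dp_gcd(lambda w: w - t, np.zeros(4), n_iters=200,
           step_size=0.5, noise_scale=0.01, rng=0)
```

Because each iteration touches a single coordinate, the noise added per step is independent of the ambient dimension, which is the intuition behind using greedy coordinate updates to sidestep the polynomial dimension dependence mentioned above.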
