May 26, 2023, 1:19 a.m. | Yangsibo Huang, Haotian Jiang, Daogao Liu, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni

cs.CR updates on arXiv.org

In this paper, we study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy [Kearns et al., 2018]. In this setting, the model trained for each data owner $j$ uses $j$'s data without privacy considerations and the other owners' data with differential privacy guarantees. This setting was initiated in [Jain et al., 2021] with a focus on linear regression. In this work, we study the setting for stochastic convex optimization (SCO). …
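To make the joint differential privacy setting concrete, the following is a minimal sketch (Python/NumPy) of the general pattern it describes: a shared statistic over all owners' data is released with a Gaussian mechanism, while owner $j$'s personalized model additionally uses $j$'s own data without noise, so only the other owners' data is protected. The mean-estimation task and names such as `dp_shared_statistic` are illustrative assumptions, not the paper's algorithm for SCO.

```python
import numpy as np

def dp_shared_statistic(owner_data, clip=1.0, noise_mult=1.0, rng=None):
    """Illustrative DP aggregate over all owners' data (Gaussian mechanism).

    Each owner's local mean is clipped to bound its contribution, then
    Gaussian noise scaled to (clip * noise_mult / n) is added to the average.
    """
    rng = rng or np.random.default_rng(0)
    n = len(owner_data)
    local = []
    for X in owner_data:
        m = X.mean(axis=0)
        norm = np.linalg.norm(m)
        local.append(m * min(1.0, clip / (norm + 1e-12)))  # L2 clipping
    avg = np.mean(local, axis=0)
    # Replacing one owner's data changes the clipped average by at most clip / n.
    noise = rng.normal(0.0, clip * noise_mult / n, size=avg.shape)
    return avg + noise

def personalized_model(j, owner_data, shared, mix=0.5):
    """Owner j's model: j's own data is used directly (no privacy toward j);
    the other owners enter only through the DP `shared` statistic."""
    own = owner_data[j].mean(axis=0)
    return mix * own + (1.0 - mix) * shared

# Toy usage: 5 owners, each holding a small dataset in R^3.
rng = np.random.default_rng(1)
data = [rng.normal(size=(20, 3)) + i for i in range(5)]
shared = dp_shared_statistic(data, noise_mult=1.0, rng=rng)
models = [personalized_model(j, data, shared) for j in range(5)]
```

In this sketch, releasing all of the personalized models to everyone would not be differentially private, but from any owner $j$'s perspective the outputs given to the others depend on $j$'s data only through the noised shared statistic, which is the intuition behind joint differential privacy.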

