Feb. 15, 2024, 5:10 a.m. | Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn

cs.CR updates on arXiv.org

arXiv:2306.15056v2 Announce Type: replace-cross
Abstract: Differential privacy (DP) ensures that training a machine learning model does not leak private data. In practice, we may have access to auxiliary public data that is free of privacy concerns. In this work, we assume access to a given amount of public data and settle the following fundamental open questions: 1. What is the optimal (worst-case) error of a DP model trained over a private data set while having access to side public data? …

Subjects: cs.CR; cs.LG; math.OC; stat.ML

Security Engineer

@ Celonis | Munich, Germany

Security Engineer, Cloud Threat Intelligence

@ Google | Reston, VA, USA; Kirkland, WA, USA

IT Security Analyst

@ EDAG Group | 36037 Fulda, Hessen, DE

Scrum Master / Agile Project Manager for Information Security (Temporary)

@ Guidehouse | Lagunilla de Heredia, Costa Rica

Waste Incident Responder (Tanker Driver)

@ Severn Trent | Derby, England, GB

Risk Vulnerability Analyst w/Clearance - Colorado

@ Rothe | Colorado Springs, CO, United States