March 31, 2023, 1:10 a.m. | Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot

cs.CR updates on arXiv.org

When training a machine learning model with differential privacy, one sets a
privacy budget. This budget represents a maximal privacy violation that any
user is willing to face by contributing their data to the training set. We
argue that this approach is limited because different users may have different
privacy expectations. Thus, setting a uniform privacy budget across all points
may be overly conservative for some users or, conversely, not sufficiently
protective for others. In this paper, we capture these …
