June 29, 2023, 1:10 a.m. | Tyler LeBlond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliott Zaresky-Williams, Edward Raff, Brian Testa

cs.CR updates on arXiv.org

Differential privacy (DP) is the prevailing technique for protecting user
data in machine learning models. However, this framework has two deficits: a
lack of clarity in selecting the privacy budget $\epsilon$, and no way to
quantify the privacy leakage incurred for a particular data row by a
particular trained model. We make progress on these limitations, and offer a
new perspective for visualizing DP results, by studying a privacy metric that
quantifies the extent to which a model trained …
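The paper's own per-row metric is not reproduced in the excerpt, but the role of the privacy budget $\epsilon$ it discusses can be illustrated with a minimal sketch of the classic Laplace mechanism for $\epsilon$-DP (a standard textbook construction, not the authors' method; the data and predicate below are made up for illustration):

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a differentially private count of rows matching a predicate.

    A counting query has sensitivity 1 (adding or removing one row changes
    the count by at most 1), so adding Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy for this query.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon -> larger noise -> stronger privacy, worse accuracy.
# This trade-off is exactly what makes choosing epsilon hard in practice.
ages = [23, 35, 47, 52, 61, 29, 44]          # hypothetical dataset
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Each halving of `epsilon` doubles the expected noise magnitude, which is why a principled way to pick the budget, one of the gaps the paper targets, matters.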

