May 19, 2023, 1:10 a.m. | Georgi Ganev, Kai Xu, Emiliano De Cristofaro

cs.CR updates on arXiv.org

Generative models trained with Differential Privacy (DP) are increasingly
used to produce synthetic data while reducing privacy risks. However, their
distinct privacy-utility tradeoffs make it challenging to determine which
models work best for specific settings/tasks. In this paper, we fill this
gap in the context of tabular data by analyzing how DP generative models
distribute privacy budgets across rows and columns, arguably the main source of
utility degradation. We examine the main factors contributing to how privacy
budgets are …
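To make the budget-splitting idea concrete, here is a minimal illustrative sketch, not the paper's method: it assumes a marginal-based DP generator that measures one noisy 1-way marginal per column with the Laplace mechanism, so under sequential composition the per-column budgets must sum to the total epsilon. The function name dp_column_marginals and the even per-column split are hypothetical choices for illustration only.

import numpy as np
import pandas as pd

def dp_column_marginals(df: pd.DataFrame, total_epsilon: float, seed: int = 0):
    """Return noisy 1-way marginals, splitting total_epsilon evenly per column (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    eps_per_col = total_epsilon / df.shape[1]   # sequential composition: per-column budgets sum to total_epsilon
    noisy = {}
    for col in df.columns:
        counts = df[col].value_counts().sort_index()
        # Counting queries have L1 sensitivity 1, so the Laplace noise scale is 1 / eps_per_col.
        noise = rng.laplace(scale=1.0 / eps_per_col, size=len(counts))
        noisy[col] = (counts + noise).clip(lower=0)
    return noisy

# Toy usage: with more columns, each marginal gets a smaller share of the budget
# and therefore more noise, which is one way utility can degrade.
toy = pd.DataFrame({"age_band": ["18-25", "26-40", "26-40"], "sex": ["F", "M", "F"]})
print(dp_column_marginals(toy, total_epsilon=1.0))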

