Mitigating Statistical Bias within Differentially Private Synthetic Data. (arXiv:2108.10934v2 [stat.ML] UPDATED)
March 3, 2022, 2:20 a.m. | Sahra Ghalebikesabi, Harrison Wilde, Jack Jewson, Arnaud Doucet, Sebastian Vollmer, Chris Holmes
cs.CR updates on arXiv.org arxiv.org
Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, …
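The core idea in the abstract, re-weighting synthetic samples by a likelihood ratio between the real and synthetic distributions so that downstream estimators are debiased, can be illustrated with a toy sketch. This is not the paper's privatised estimator: the distributions below (two unit-variance Gaussians, hypothetical) are chosen so the ratio has a closed form, purely to show how importance weights correct a biased mean estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): the "real" data follow N(0, 1), while the
# synthetic generator is biased and produces N(0.5, 1).
mu_real, mu_syn = 0.0, 0.5
synthetic = rng.normal(mu_syn, 1.0, size=100_000)

def likelihood_ratio(x):
    # p_real(x) / p_syn(x) for two unit-variance Gaussians, in closed form.
    # In the paper's setting this ratio would instead be estimated under
    # differential privacy, since the real data are undisclosed.
    return np.exp(-0.5 * (x - mu_real) ** 2 + 0.5 * (x - mu_syn) ** 2)

w = likelihood_ratio(synthetic)
naive = synthetic.mean()                       # biased toward mu_syn (~0.5)
reweighted = np.average(synthetic, weights=w)  # self-normalised importance estimate (~0.0)
print(naive, reweighted)
```

The re-weighted estimate recovers the real-data mean despite only touching synthetic samples, which is the general property the proposed strategies aim for across differentially private generative models.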