April 2, 2024, 7:11 p.m. | Abdallah Alshantti, Adil Rasheed, Frank Westad

cs.CR updates on arXiv.org

arXiv:2404.00696v1 Announce Type: new
Abstract: Generative models are subject to overfitting and thus may potentially leak sensitive information from the training data. In this work. we investigate the privacy risks that can potentially arise from the use of generative adversarial networks (GANs) for creating tabular synthetic datasets. For the purpose, we analyse the effects of re-identification attacks on synthetic data, i.e., attacks which aim at selecting samples that are predicted to correspond to memorised training samples based on their proximity …
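The truncated abstract describes a proximity-based re-identification attack: synthetic records that lie unusually close to real records are flagged as likely memorised copies. As an illustration only, and not the paper's specific method, here is a minimal sketch of that idea in Python. It assumes the attacker holds some set of reference records to compare against; the function name reidentification_scores and the toy data are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def reidentification_scores(synthetic: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Score each synthetic record by its distance to the closest reference
    record; smaller distances suggest possible memorisation."""
    nn = NearestNeighbors(n_neighbors=1).fit(reference)
    distances, _ = nn.kneighbors(synthetic)
    return distances.ravel()

# Toy usage (hypothetical data): plant 10 exact copies of reference records
# in the synthetic set and check that they rank as the top suspects.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 8))                               # attacker's reference records
synthetic = np.vstack([rng.normal(size=(990, 8)), reference[:10]])   # 10 leaked copies at the end
scores = reidentification_scores(synthetic, reference)
suspects = np.argsort(scores)[:10]   # indices of the 10 closest synthetic rows
print(suspects)                      # should recover indices 990..999
```

In practice, an attack along these lines would be run on the GAN's synthetic output, and the fraction of flagged records that truly match training data measures the privacy leakage; the paper evaluates such re-identification risk for tabular GANs.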

