April 2, 2024, 7:11 p.m. | Abdallah Alshantti, Adil Rasheed, Frank Westad

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.00696v1 Announce Type: new
Abstract: Generative models are subject to overfitting and thus may potentially leak sensitive information from the training data. In this work. we investigate the privacy risks that can potentially arise from the use of generative adversarial networks (GANs) for creating tabular synthetic datasets. For the purpose, we analyse the effects of re-identification attacks on synthetic data, i.e., attacks which aim at selecting samples that are predicted to correspond to memorised training samples based on their proximity …

