On the Privacy Effect of Data Enhancement via the Lens of Memorization
March 21, 2024, 4:11 a.m. | Xiao Li, Qiongxiu Li, Zhanhao Hu, Xiaolin Hu
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine learning poses severe privacy concerns as it has been shown that the learned models can reveal sensitive information about their training data. Many works have investigated the effect of widely adopted data augmentation and adversarial training techniques, termed data enhancement in the paper, on the privacy leakage of machine learning models. Such privacy effects are often measured by membership inference attacks (MIAs), which aim to identify whether a particular example belongs to the training …
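Since the abstract frames privacy leakage in terms of membership inference attacks, a minimal sketch may help: the simplest MIA variant thresholds a model's per-example loss, guessing that low-loss examples were in the training set (memorized examples tend to have lower loss). The loss values and threshold below are hypothetical illustrations, not from the paper.

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess membership: examples with loss below the threshold are
    predicted to be training-set members (members tend to fit better)."""
    return losses < threshold

# Hypothetical per-example losses from some trained model.
member_losses = np.array([0.05, 0.10, 0.20, 0.08])      # training examples
non_member_losses = np.array([0.90, 1.20, 0.45, 0.70])  # held-out examples

threshold = 0.4  # e.g. average loss over a calibration set (assumed here)
pred_members = loss_threshold_mia(member_losses, threshold)
pred_non = loss_threshold_mia(non_member_losses, threshold)

# Attack accuracy: fraction of correct membership guesses over all 8 examples.
accuracy = (pred_members.sum() + (~pred_non).sum()) / 8
```

On this toy data the attack separates members from non-members perfectly; in practice, data augmentation and adversarial training shift these loss distributions, which is exactly the privacy effect the paper studies.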
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Senior Penetration Tester
@ Deloitte | Madrid, Spain
Associate Cyber Incident Responder
@ Highmark Health | Working at Home, Pennsylvania
Senior Insider Threat Analyst
@ IT Concepts Inc. | Woodlawn, Maryland, United States