On the Privacy Effect of Data Enhancement via the Lens of Memorization
March 21, 2024, 4:11 a.m. | Xiao Li, Qiongxiu Li, Zhanhao Hu, Xiaolin Hu
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine learning poses severe privacy concerns, as learned models have been shown to reveal sensitive information about their training data. Many works have investigated the effect of widely adopted data augmentation and adversarial training techniques, collectively termed data enhancement in the paper, on the privacy leakage of machine learning models. Such privacy effects are often measured by membership inference attacks (MIAs), which aim to identify whether a particular example belongs to the training …
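The abstract's notion of a membership inference attack can be illustrated with a minimal loss-threshold sketch: an attacker guesses that an example was in the training set when the model's loss on it is low, exploiting memorization. The loss distributions below are invented for illustration and are not from the paper.

```python
import random

random.seed(0)

# Hypothetical per-example losses: training-set members tend to incur
# lower loss than held-out examples because the model memorizes them.
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmember_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

def loss_threshold_mia(loss, threshold=0.5):
    """Predict 'member' when the model's loss on the example is below threshold."""
    return loss < threshold

# Attack success: true-positive rate on members vs. false-positive rate
# on non-members. A large gap indicates measurable privacy leakage.
tpr = sum(loss_threshold_mia(l) for l in member_losses) / len(member_losses)
fpr = sum(loss_threshold_mia(l) for l in nonmember_losses) / len(nonmember_losses)
print(f"attack TPR={tpr:.2f}  FPR={fpr:.2f}")
```

Stronger MIAs in the literature calibrate per-example thresholds or train shadow models, but the gap between member and non-member loss shown here is the core signal such attacks measure.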
Jobs in InfoSec / Cybersecurity
Information System Security Officer (ISSO)
@ LinQuest | Boulder, Colorado, United States
Project Manager - Security Engineering
@ MongoDB | New York City
Security Continuous Improvement Program Manager (m/f/d)
@ METRO/MAKRO | Düsseldorf, Germany
Senior JavaScript Security Engineer, Tools
@ MongoDB | New York City
Principal Platform Security Architect
@ Microsoft | Redmond, Washington, United States
Staff Cyber Security Engineer (Emerging Platforms)
@ NBCUniversal | Englewood Cliffs, New Jersey, United States