Nov. 2, 2022, 1:24 a.m. | Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang

cs.CR updates on arXiv.org arxiv.org

As in-the-wild data are increasingly involved in the training stage, machine
learning applications become more susceptible to data poisoning attacks. Such
attacks typically lead to test-time accuracy degradation or controlled
misprediction. In this paper, we investigate a third type of exploitation of
data poisoning: increasing the risks of privacy leakage for benign training
samples. To this end, we demonstrate a set of data poisoning attacks that amplify
the membership exposure of the targeted class. We first propose a generic …

