Dec. 7, 2022, 2:10 a.m. | Marissa Connor, Vincent Emanuele

cs.CR updates on arXiv.org arxiv.org

Semi-supervised learning methods can train high-accuracy machine learning
models with a fraction of the labeled training samples required for traditional
supervised learning. Such methods do not typically involve close review of the
unlabeled training samples, making them tempting targets for data poisoning
attacks. In this paper, we investigate the vulnerabilities of semi-supervised
learning methods to backdoor data poisoning attacks on the unlabeled samples.
We show that simple poisoning attacks that influence the distribution of the
poisoned samples' predicted labels are …
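The attack surface the abstract describes — slipping triggered samples into the unlabeled pool so the pseudo-labeling loop associates the trigger with an attacker-chosen class — can be sketched minimally as below. The trigger design (a corner patch), the poisoning rate, and the function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def add_trigger(image, patch_size=3, patch_value=1.0):
    # Stamp a small bright square in the bottom-right corner as a
    # hypothetical backdoor trigger pattern.
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_unlabeled_pool(unlabeled, attacker_images, rate=0.01, seed=0):
    # unlabeled: (N, H, W) array of unlabeled training images.
    # attacker_images: (M, H, W) array of samples from the attacker's
    # target class. Returns a copy of `unlabeled` with a small fraction
    # replaced by triggered attacker samples. No labels are modified:
    # the semi-supervised pipeline's own pseudo-labels do the work.
    rng = np.random.default_rng(seed)
    n = max(1, int(rate * len(unlabeled)))
    src = rng.integers(0, len(attacker_images), size=n)
    dst = rng.choice(len(unlabeled), size=n, replace=False)
    out = unlabeled.copy()
    for s, d in zip(src, dst):
        out[d] = add_trigger(attacker_images[s])
    return out
```

Because the unlabeled pool is rarely reviewed by hand, even a low `rate` like the 1% default here can go unnoticed while shifting the distribution of predicted labels on the poisoned samples.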
