April 15, 2022, 1:20 a.m. | Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir

cs.CR updates on arXiv.org

Given the computational cost and technical expertise required to train
machine learning models, users may delegate the task of learning to a service
provider. We show how a malicious learner can plant an undetectable backdoor
into a classifier. On the surface, such a backdoored classifier behaves
normally, but in reality, the learner maintains a mechanism for changing the
classification of any input, with only a slight perturbation. Importantly,
without the appropriate "backdoor key", the mechanism is hidden and cannot be …
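To make the idea concrete, here is a toy sketch of a planted backdoor in a linear classifier. This is not the paper's cryptographic construction: the "key" here is simply secret knowledge of the weight vector `w`, and `backdoor_perturb` is an illustrative helper, assumed for this example. With the key, any input can be flipped to a chosen label via a small additive perturbation; without it, the model looks like an ordinary classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ordinary-looking linear classifier: sign(w . x)
w = rng.normal(size=16)

def classify(x):
    return 1 if w @ x >= 0 else -1

# Hypothetical "backdoor key": in the paper this is cryptographic;
# here it is just the learner's secret knowledge of w itself.
backdoor_key = w

def backdoor_perturb(x, target, key, eps=1e-3):
    """Nudge x just past the decision boundary so classify(x') == target."""
    margin = key @ x
    if (1 if margin >= 0 else -1) == target:
        return x  # already classified as desired
    # Step along the key direction by slightly more than the margin.
    step = (-margin / (key @ key)) * (1 + eps)
    return x + step * key

x = rng.normal(size=16)
y = classify(x)
x_adv = backdoor_perturb(x, -y, backdoor_key)
print(classify(x_adv) == -y)               # label flipped
print(np.linalg.norm(x_adv - x))           # perturbation is small
```

Note the asymmetry this toy case illustrates: flipping the label is trivial for whoever holds the key, while in the paper's actual constructions even a computationally bounded observer with full access to the model cannot distinguish the backdoored classifier from an honest one.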

