Feb. 13, 2023, 2:18 a.m. | Eugene Bagdasaryan, Vitaly Shmatikov

cs.CR updates on arXiv.org

Commoditization and broad adoption of machine learning (ML) technologies expose their users to new security risks. Many models today are based on neural networks. Training and deploying these models for real-world applications involves complex hardware and software pipelines applied to training data from many sources. Models trained on untrusted data are vulnerable to poisoning attacks that introduce "backdoor" functionality. Compromising a fraction of the training data requires few resources from the attacker, but defending against these attacks is …
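
A minimal sketch of the kind of attack the abstract describes: poison a small fraction of a training set by stamping a trigger pattern onto selected samples and relabeling them to an attacker-chosen class. The trigger shape, the 5% poison fraction, and the helper name poison_dataset are illustrative assumptions, not details from the paper.

import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.05, rng=None):
    """Copy (images, labels) and backdoor a small fraction of samples:
    stamp a trigger patch onto them and relabel them to the attacker's target class."""
    rng = np.random.default_rng(0) if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a small bright square in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    # Relabel so the model learns to associate the trigger with the target class.
    labels[idx] = target_label
    return images, labels

# Example: poison 5% of a toy 28x28 grayscale dataset with target class 7.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)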
