May 18, 2023, 1:10 a.m. | Nils Lukas, Florian Kerschbaum

cs.CR updates on arXiv.org

Deep image classification models trained on large amounts of web-scraped data
are vulnerable to data poisoning, a mechanism for backdooring models. Even a
few poisoned samples seen during training can entirely undermine the model's
integrity during inference. While it is known that poisoning more samples
enhances an attack's effectiveness and robustness, it is unknown whether
poisoning too many samples weakens an attack by making it more detectable. We
observe a fundamental detectability/robustness trade-off in data poisoning
attacks: Poisoning too few …
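As a minimal, hypothetical sketch of the mechanism the abstract describes (not the paper's own method), the Python snippet below backdoors a toy dataset by stamping a small trigger patch onto a randomly chosen fraction of training images and relabeling them with an attacker-chosen target class. The poisoning rate, patch size, trigger value, and target label are illustrative assumptions.

    import numpy as np

    def poison_dataset(images, labels, poison_rate=0.01, target_label=0,
                       trigger_value=1.0, patch_size=3, seed=0):
        """Backdoor a copy of (images, labels): stamp a trigger patch onto a
        random fraction of the images and relabel them as target_label.

        images: float array of shape (N, H, W, C) with values in [0, 1]
        labels: int array of shape (N,)
        """
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = rng.choice(len(images), size=n_poison, replace=False)
        # Bottom-right square patch: the trigger the model learns to
        # associate with target_label during training.
        images[idx, -patch_size:, -patch_size:, :] = trigger_value
        labels[idx] = target_label
        return images, labels, idx

    # Toy usage: 1,000 random 32x32 RGB "images", 10 classes, 1% poisoned.
    X = np.random.default_rng(1).random((1000, 32, 32, 3), dtype=np.float32)
    y = np.random.default_rng(2).integers(0, 10, size=1000)
    X_p, y_p, poisoned = poison_dataset(X, y, poison_rate=0.01)
    print(f"poisoned {len(poisoned)} of {len(X)} samples")

Raising poison_rate makes the backdoor easier for the model to learn, hence more robust, but also leaves more tampered samples for a defender to find, which is exactly the detectability/robustness trade-off the abstract observes.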
