May 18, 2023, 1:10 a.m. | Nils Lukas, Florian Kerschbaum

cs.CR updates on arXiv.org

Deep image classification models trained on large amounts of web-scraped data
are vulnerable to data poisoning, a mechanism for backdooring models. Even a
few poisoned samples seen during training can entirely undermine the model's
integrity during inference. While it is known that poisoning more samples
enhances an attack's effectiveness and robustness, it is unknown whether
poisoning too many samples weakens an attack by making it more detectable. We
observe a fundamental detectability/robustness trade-off in data poisoning
attacks: Poisoning too few …
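The backdoor mechanism the abstract describes — a small fraction of training samples stamped with a trigger and relabeled to an attacker-chosen class — can be sketched as follows. This is an illustrative toy implementation of trigger-based poisoning, not the paper's actual attack; the function name, patch size, and `poison_rate` parameter are all assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.01, seed=0):
    """Backdoor a copy of the dataset: stamp a small white trigger patch
    onto a random fraction of images and flip their labels to target_label.
    Illustrative sketch only -- not the method from the paper."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:, :] = 1.0  # 3x3 white patch, bottom-right corner
    labels[idx] = target_label      # relabel poisoned samples
    return images, labels, idx

# Usage: poison 5% of a toy dataset of 100 8x8 RGB images
imgs = np.zeros((100, 8, 8, 3), dtype=np.float32)
lbls = np.zeros(100, dtype=np.int64)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7, poison_rate=0.05)
```

The `poison_rate` knob is exactly the quantity the abstract's trade-off concerns: raising it makes the backdoor more robust but also makes the poisoned subset easier to detect.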
