March 8, 2023, 2:10 a.m. | Yiwei Lu, Gautam Kamath, Yaoliang Yu

cs.CR updates on arXiv.org

Indiscriminate data poisoning attacks aim to decrease a model's test accuracy
by injecting a small amount of corrupted training data. Despite significant
interest, existing attacks remain relatively ineffective against modern machine
learning (ML) architectures. In this work, we introduce the notion of model
poisonability as a technical tool to explore the intrinsic limits of data
poisoning attacks. We derive an easily computable threshold to establish and
quantify a surprising phase transition phenomenon among popular ML models: data
poisoning attacks become …
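To make the setting concrete, below is a minimal sketch of an indiscriminate data poisoning attack via label flipping, measuring the drop in test accuracy it causes. The dataset, model, and 3% poisoning budget are illustrative assumptions; this is a simple baseline attack, not the paper's method or its poisonability threshold.

```python
# Minimal sketch of indiscriminate data poisoning via label flipping.
# All choices (dataset, model, 3% budget) are illustrative assumptions,
# not the attack or threshold from the paper above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Baseline: train on clean data and record test accuracy.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Corrupt a small fraction of the training set by flipping labels
# (stronger attacks optimize the poisoned points instead).
budget = int(0.03 * len(y_tr))          # 3% poisoning budget
idx = rng.choice(len(y_tr), size=budget, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]   # flip the binary labels

# Retrain on the poisoned data and compare.
poisoned_acc = (
    LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
)

print(f"clean test accuracy:    {clean_acc:.3f}")
print(f"poisoned test accuracy: {poisoned_acc:.3f}")
```

The gap between the two accuracies is the attacker's objective; the abstract's point is that for modern architectures this gap tends to stay small until the model crosses a computable poisonability threshold.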
