Exploring the Limits of Indiscriminate Data Poisoning Attacks. (arXiv:2303.03592v1 [cs.LG])
cs.CR updates on arXiv.org
Indiscriminate data poisoning attacks aim to decrease a model's test accuracy
by injecting a small amount of corrupted training data. Despite significant
interest, existing attacks remain relatively ineffective against modern machine
learning (ML) architectures. In this work, we introduce the notion of model
poisonability as a technical tool to explore the intrinsic limits of data
poisoning attacks. We derive an easily computable threshold to establish and
quantify a surprising phase transition phenomenon among popular ML models: data
poisoning attacks become …
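To make the setting concrete, here is a minimal, self-contained sketch of an indiscriminate label-flip poisoning attack against a toy 1-D threshold classifier. Everything here (the data generator, the mean-midpoint classifier, and the one-sided flipping heuristic) is an illustrative assumption, not the method or the poisonability threshold from the paper; it only shows how corrupting a small fraction of training labels can degrade clean test accuracy.

```python
import random

def make_data(n, seed):
    # toy 1-D data: label is 1 when x > 0.5 (illustrative assumption)
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    ys = [1 if x > 0.5 else 0 for x in xs]
    return xs, ys

def fit_threshold(xs, ys):
    # mean-based classifier: decision threshold is the midpoint
    # of the two class means
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
    return (m0 + m1) / 2

def accuracy(t, xs, ys):
    return sum((x > t) == (y == 1) for x, y in zip(xs, ys)) / len(xs)

def poison(xs, ys, frac):
    # one-sided label flip: relabel the largest-x points as class 0,
    # dragging the class-0 mean upward and shifting the learned threshold
    n_flip = int(frac * len(xs))
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
    ys = list(ys)
    for i in order[:n_flip]:
        ys[i] = 0
    return xs, ys

xs_tr, ys_tr = make_data(200, seed=0)
xs_te, ys_te = make_data(1000, seed=1)
acc_clean = accuracy(fit_threshold(xs_tr, ys_tr), xs_te, ys_te)
xs_p, ys_p = poison(xs_tr, ys_tr, frac=0.15)
acc_pois = accuracy(fit_threshold(xs_p, ys_p), xs_te, ys_te)
print(acc_clean, acc_pois)
```

Flipping 15% of training labels measurably lowers test accuracy for this fragile mean-based learner; the paper's point is that modern ML architectures are far less sensitive, and its poisonability threshold quantifies where such attacks stop working.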