What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? (arXiv:2307.01073v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
We study indiscriminate poisoning attacks on linear learners, in which an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we further investigate whether datasets can be inherently robust to indiscriminate poisoning attacks for linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of …
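To make the threat model concrete, here is a minimal sketch (not taken from the paper) of indiscriminate poisoning against a linear learner: the attacker appends a small fraction of mislabeled points to the training set and the victim's test error rises. The two-Gaussian dataset, the 3% poisoning budget, and the flipped-label placement heuristic are all illustrative assumptions, not the optimized attacks the paper evaluates.

```python
# Illustrative sketch of indiscriminate data poisoning on a linear learner.
# Assumptions (not from the paper): synthetic two-Gaussian data, a 3% poison
# budget, and a crude flipped-label placement heuristic for the poison points.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean binary classification data: two Gaussian blobs in 2D.
n = 500
X_train = np.vstack([rng.normal(+1.0, 1.0, (n, 2)),
                     rng.normal(-1.0, 1.0, (n, 2))])
y_train = np.concatenate([np.ones(n), np.zeros(n)])

X_test = np.vstack([rng.normal(+1.0, 1.0, (n, 2)),
                    rng.normal(-1.0, 1.0, (n, 2))])
y_test = np.concatenate([np.ones(n), np.zeros(n)])

# Poison budget: 3% of the training set, placed deep inside the positive
# region but labeled as the negative class to drag the decision boundary.
n_poison = int(0.03 * len(X_train))
X_poison = rng.normal(+4.0, 0.3, (n_poison, 2))
y_poison = np.zeros(n_poison)  # flipped label

clean = LogisticRegression().fit(X_train, y_train)
poisoned = LogisticRegression().fit(
    np.vstack([X_train, X_poison]),
    np.concatenate([y_train, y_poison]),
)

# Indiscriminate poisoning targets overall test error, not a specific input.
print("clean test error:   ", 1 - clean.score(X_test, y_test))
print("poisoned test error:", 1 - poisoned.score(X_test, y_test))
```

On well-separated, low-variance data like this, the error increase from a small poison budget is modest, which mirrors the paper's motivating observation that some distributions are inherently harder to poison.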