Generating Adversarial Attacks in the Latent Space. (arXiv:2304.04386v1 [cs.LG])
cs.CR updates on arXiv.org
Adversarial attacks in the input (pixel) space typically impose noise
margins, such as the $L_1$- or $L_\infty$-norm, to produce imperceptibly
perturbed data that confound deep learning networks. Such noise margins
confine the magnitude of permissible noise. In this work, we propose
injecting adversarial perturbations in the latent (feature) space using a
generative adversarial network, removing the need for margin-based priors.
Experiments on the MNIST, CIFAR10, Fashion-MNIST, CIFAR100, and Stanford
Dogs datasets support the effectiveness of the proposed method in
generating adversarial attacks …
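To make the idea concrete, here is a minimal sketch of a latent-space attack. The abstract does not specify the paper's exact procedure, so the generator `G`, classifier `f`, and the optimization loop below are illustrative assumptions: a perturbation is optimized in the generator's latent space rather than in pixel space, with no norm bound imposed.

```python
# Illustrative sketch only: G (latent -> image generator), f (classifier),
# z (latent code), and target_label are assumed to be given; this is not
# the paper's published implementation.
import torch
import torch.nn.functional as F

def latent_attack(G, f, z, target_label, steps=100, lr=0.05):
    """Optimize a latent-space perturbation delta so the generated image
    G(z + delta) is classified as target_label. No pixel-space norm margin
    is enforced; the generator keeps the output on the image manifold."""
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = G(z + delta)                  # decode perturbed latent code
        logits = f(x_adv)                     # classify the generated image
        loss = F.cross_entropy(logits, target_label)
        opt.zero_grad()
        loss.backward()                       # gradients flow through G
        opt.step()
    return G(z + delta).detach()
```

The design point the abstract emphasizes is visible here: because the perturbation acts before the generator, plausibility of the output comes from the generator itself rather than from an explicit $L_1$ or $L_\infty$ budget on pixel noise.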