April 11, 2023, 1:10 a.m. | Nitish Shukla, Sudipta Banerjee

cs.CR updates on arXiv.org

Adversarial attacks in the input (pixel) space typically impose a noise
margin, such as an $L_1$- or $L_{\infty}$-norm bound, to produce imperceptibly
perturbed data that confound deep learning networks. Such margins confine the
magnitude of the permissible noise. In this work, we propose injecting
adversarial perturbations in the latent (feature) space using a generative
adversarial network, removing the need for margin-based priors. Experiments on
the MNIST, CIFAR10, Fashion-MNIST, CIFAR100, and Stanford Dogs datasets support
the effectiveness of the proposed method in generating adversarial attacks …
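To make the core idea concrete, here is a minimal sketch of a latent-space attack: instead of adding norm-bounded noise to pixels, a perturbation is optimized over the latent code of a pretrained generator so that the decoded image fools a target classifier. The function names (`generator`, `classifier`, `latent_space_attack`), the optimizer choice, and all hyperparameters are illustrative assumptions; the paper's actual architecture and training procedure may differ.

```python
# Sketch: adversarial perturbation in a GAN's latent space.
# `generator` and `classifier` are assumed pretrained torch.nn.Modules;
# everything here is illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F

def latent_space_attack(generator, classifier, z, true_label,
                        steps=100, lr=0.01):
    """Optimize a perturbation delta on latent code z so that
    classifier(generator(z + delta)) mislabels the image.

    Note: no L_p-norm margin is imposed on delta; the only prior on the
    perturbation is the generator's learned data manifold.
    """
    delta = torch.zeros_like(z, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        x_adv = generator(z + delta)      # decode the perturbed latent code
        logits = classifier(x_adv)
        # Untargeted attack: push the classifier away from the true label
        # by maximizing its cross-entropy loss (minimizing the negative).
        loss = -F.cross_entropy(logits, true_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return generator(z + delta).detach()
```

Because the perturbation lives in the generator's latent space, the adversarial image stays on (or near) the learned data manifold by construction, which is what removes the need for an explicit pixel-space magnitude constraint.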
