May 23, 2022, 1:20 a.m. | Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen

cs.CR updates on arXiv.org

The vulnerability of deep neural networks to adversarial attacks has been
widely demonstrated (e.g., adversarial example attacks). Traditional attacks
apply unstructured pixel-wise perturbations to fool a classifier. An
alternative approach is to apply perturbations in the latent space instead.
However, such perturbations are hard to control because the latent space
lacks interpretability and disentanglement. In this paper, we propose a more
practical adversarial attack, designed as structured perturbations with
semantic meaning. Our proposed technique manipulates the semantic attributes
of images via …
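For intuition, below is a minimal sketch of the two perturbation styles the abstract contrasts: a classic pixel-wise attack (FGSM) versus a generic latent-space attack. The names model, encoder, and decoder are hypothetical stand-ins for a pretrained classifier and generative model; this is not the paper's method, which goes further by steering interpretable semantic attributes rather than a raw latent vector.

import torch
import torch.nn.functional as F

def fgsm_pixel_attack(model, x, y, eps=0.03):
    # Traditional unstructured attack: step every pixel in the
    # direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def latent_attack(model, encoder, decoder, x, y,
                  eps=0.1, steps=10, lr=0.01):
    # Alternative: perturb a latent code z rather than pixels, so the
    # change is shaped by the generative model's latent geometry.
    z = encoder(x).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Maximize the loss on the true label y (hence the negation).
        loss = -F.cross_entropy(model(decoder(z + delta)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the latent shift small
    return decoder(z + delta).detach()

Clamping delta bounds the latent shift, but without interpretable, disentangled latent directions such edits can still alter unrelated image content, which is exactly the control problem the abstract points to.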

