Feb. 12, 2024, 5:10 a.m. | Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, Ihsen Alouani

cs.CR updates on arXiv.org

Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world by introducing prominent, maliciously designed physical perturbations. Evaluating the naturalness of such attacks is crucial, because humans can readily detect and eliminate unnatural manipulations. To overcome this limitation, recent work has proposed leveraging generative adversarial networks (GANs) to generate naturalistic patches that may not attract human attention. However, these approaches suffer from a limited latent space, which leads …
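The core idea behind the GAN-based approach can be sketched in a toy form: instead of perturbing pixels directly, the attacker searches the generator's low-dimensional latent space, so every candidate patch stays on the generator's (naturalistic) output manifold. The following minimal sketch uses stand-in linear models for the generator and classifier (both are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a fixed linear "generator" G mapping a
# low-dimensional latent z to a patch, and a linear "classifier" W.
LATENT_DIM, PATCH_DIM, NUM_CLASSES = 8, 16, 3
G = rng.normal(size=(PATCH_DIM, LATENT_DIM))   # generator weights
W = rng.normal(size=(NUM_CLASSES, PATCH_DIM))  # classifier weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def target_prob(z, target=0):
    """Probability the classifier assigns to the attacker's target class."""
    patch = np.tanh(G @ z)  # generator output, bounded like image pixels
    return softmax(W @ patch)[target]

# Latent-space attack: ascend the target-class probability with respect
# to z only; the patch itself is never edited directly.
z = rng.normal(size=LATENT_DIM)
p_before = target_prob(z)
eps, lr = 1e-4, 0.3
for _ in range(200):
    grad = np.zeros_like(z)
    for i in range(LATENT_DIM):  # finite-difference gradient estimate
        dz = np.zeros_like(z)
        dz[i] = eps
        grad[i] = (target_prob(z + dz) - target_prob(z - dz)) / (2 * eps)
    z_new = z + lr * grad
    if target_prob(z_new) > target_prob(z):  # accept only improving steps
        z = z_new
p_after = target_prob(z)
print(f"target-class probability: {p_before:.3f} -> {p_after:.3f}")
```

Because the search is confined to the latent space, the attack's expressiveness is capped by how much of image space the generator covers; that restriction is exactly the "limited latent space" drawback the abstract points to.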

