June 27, 2022, 1:20 a.m. | Zitao Chen, Pritam Dash, Karthik Pattabiraman

cs.CR updates on arXiv.org arxiv.org

Adversarial patch attacks, which inject arbitrary distortions within a bounded
region of an image, can trigger misclassification in deep neural networks
(DNNs). Because these attacks are robust (i.e., physically realizable) and
universally malicious, they represent a severe security threat to real-world
DNN-based systems.


This work proposes Jujutsu, a two-stage technique to detect and mitigate
robust and universal adversarial patch attacks. We first observe that patch
attacks often exert a large influence on the prediction output in order to
dominate the prediction …
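The excerpt above describes a patch as an arbitrary distortion confined to a bounded region of the input image. A minimal sketch of that setup, assuming a NumPy image representation (the function name and patch placement are illustrative, not from the paper):

```python
import numpy as np

def apply_adversarial_patch(image, patch, top, left):
    """Overwrite a bounded region of `image` with `patch`.

    image: HxWxC float array in [0, 1]
    patch: hxwxC float array in [0, 1] holding the adversarial distortion
    (top, left): upper-left corner of the region the patch occupies
    """
    h, w = patch.shape[:2]
    patched = image.copy()
    patched[top:top + h, left:left + w] = patch
    return patched

# Illustrative example: a 32x32 patch placed on a 224x224 image.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
patch = rng.random((32, 32, 3))
adv = apply_adversarial_patch(image, patch, top=96, left=96)
```

Note that the distortion is unconstrained inside the region (any pixel values are allowed) but zero outside it, which is what makes such patches physically realizable, e.g., as a printed sticker.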

