July 27, 2022, 1:20 a.m. | Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe

cs.CR updates on arXiv.org

Deep neural networks are vulnerable to attacks from adversarial inputs and,
more recently, to Trojan attacks that misguide or hijack the model's
decisions. We expose the existence of an intriguing class of spatially
bounded, physically realizable adversarial examples -- Universal NaTuralistic
adversarial paTches, which we call TnTs -- found by exploring the superset of
the spatially bounded adversarial example space and the natural input space
within generative adversarial networks. Now, an adversary can arm themselves
with a patch that is naturalistic, less malicious-looking, physically …
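The core idea the abstract describes, searching a generator's latent space so that the patch stays naturalistic while fooling a victim classifier, can be illustrated with a short sketch. The sketch below is a hypothetical illustration under stated assumptions, not the authors' implementation: the pretrained generator `G`, its latent size `LATENT_DIM`, the victim classifier `victim`, and the helpers `apply_patch` and `search_patch` are all assumed names.

```python
import torch
import torch.nn.functional as F

LATENT_DIM = 128  # assumed latent size of the pretrained generator

def apply_patch(images, patch, top=20, left=20):
    # Paste the generated patch over a fixed square region of each image.
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[:, :, top:top + h, left:left + w] = patch
    return patched

def search_patch(G, victim, images, target_class, steps=500, lr=0.05):
    # Optimize a latent code z so that the patch G(z), applied to the
    # batch, steers the victim classifier toward `target_class`.
    z = torch.randn(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        # Because the patch is decoded by the GAN, it stays on the
        # generator's natural-image manifold rather than drifting into
        # conspicuous adversarial noise.
        patch = G(z).clamp(0, 1)
        logits = victim(apply_patch(images, patch))
        loss = F.cross_entropy(logits, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach().clamp(0, 1)
```

Optimizing the latent code rather than the patch pixels is what keeps the result naturalistic; the classifier loss only steers the search within what the generator can already produce.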

Tags: adversarial attacks, network, neural network, patches, systems
