TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems. (arXiv:2111.09999v2 [cs.CV] UPDATED)
July 27, 2022, 1:20 a.m. | Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe
cs.CR updates on arXiv.org
Deep neural networks are vulnerable to attacks from adversarial inputs and,
more recently, to Trojan attacks that misguide or hijack the model's decisions.
We expose the existence of an intriguing class of spatially bounded, physically
realizable adversarial examples -- Universal NaTuralistic adversarial paTches
-- which we call TnTs, by exploring the superset of the spatially bounded
adversarial example space and the natural input space within generative
adversarial networks. Now, an adversary can arm themselves with a patch that is
naturalistic, less malicious-looking, physically …
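The core idea of a universal patch attack -- optimizing a small, bounded region of the input so a classifier's confidence in the correct class collapses -- can be illustrated with a toy sketch. This is not the paper's method: the linear "classifier", the tanh "generator" standing in for a GAN, and all dimensions below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): a linear classifier
# and a linear+tanh "generator" that maps a latent code to a patch.
D, K, P, Z = 16, 3, 4, 2            # input dim, classes, patch size, latent dim
W = rng.normal(size=(K, D))          # classifier weights
G = rng.normal(size=(P, Z))          # generator weights
x = rng.normal(size=D)               # a clean input
y = int(np.argmax(W @ x))            # classifier's clean prediction

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def apply_patch(x, z):
    """Paste the generated patch over the first P entries of x."""
    x2 = x.copy()
    x2[:P] = np.tanh(G @ z)          # generator output, bounded to [-1, 1]
    return x2

z = rng.normal(size=Z)
for _ in range(200):                 # gradient ASCENT on the true-class loss
    p = softmax(W @ apply_patch(x, z))
    # d(cross-entropy)/d(logits) = p - onehot(y); chain through patch and tanh
    g_logits = p - np.eye(K)[y]
    g_patch = (W[:, :P].T @ g_logits) * (1 - np.tanh(G @ z) ** 2)
    z += 0.5 * (G.T @ g_patch)       # ascend: increase loss on predicted class

clean_conf = softmax(W @ x)[y]
adv_conf = softmax(W @ apply_patch(x, z))[y]
print(f"confidence in class {y}: clean={clean_conf:.3f}, patched={adv_conf:.3f}")
```

Searching a generator's latent space, rather than optimizing raw pixels directly, is what keeps the patch on the manifold of natural-looking images; the toy version above only keeps it bounded via tanh.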
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Security Solution Architect
@ Civica | London, England, United Kingdom
Information Security Officer (80-100%)
@ SIX Group | Zurich, CH
Cloud Information Systems Security Engineer
@ Analytic Solutions Group | Chantilly, Virginia, United States
SRE Engineer & Security Software Administrator
@ Talan | Mexico City, Spain