all InfoSec news
Understanding Adversarial Robustness Against On-manifold Adversarial Examples. (arXiv:2210.00430v1 [cs.LG])
Oct. 4, 2022, 1:20 a.m. | Jiancong Xiao, Liusha Yang, Yanbo Fan, Jue Wang, Zhi-Quan Luo
cs.CR updates on arXiv.org arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples: a well-trained model can be fooled simply by adding small
perturbations to the original data. One hypothesis for the existence of
adversarial examples is the off-manifold assumption: adversarial examples
lie off the data manifold. However, recent research has shown that on-manifold
adversarial examples also exist. In this paper, we revisit the off-manifold
assumption and study the question: at what level is the poor performance …
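The "small perturbation" attack the abstract describes can be sketched with a minimal FGSM-style step on a toy linear classifier. This is an illustrative example only, not the paper's method; the model, weights, and epsilon below are all made up for demonstration.

```python
import numpy as np

# Toy illustration of attacking a model by adding a small perturbation
# to the input (an FGSM-style step on a linear logistic classifier).
# All names and values here are illustrative, not from the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # weights of a toy linear classifier
x = rng.normal(size=8)          # a "clean" input
y = 1.0                         # its true label (1 or 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(inp):
    # Binary cross-entropy of the linear model on (inp, y).
    p = sigmoid(w @ inp)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the input (closed form for this model).
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: nudge every input coordinate by a small eps in the
# direction that increases the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))     # the perturbed input has higher loss
```

The perturbation has infinity-norm exactly `eps`, yet it reliably increases the model's loss; on image data such perturbations are typically imperceptible to humans, which is what makes adversarial examples concerning.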