The Dimpled Manifold Model of Adversarial Examples in Machine Learning. (arXiv:2106.10151v2 [cs.LG] UPDATED)
June 2, 2022, 1:20 a.m. | Adi Shamir, Odelia Melamed, Oriel BenShmuel
cs.CR updates on arXiv.org arxiv.org
The extreme fragility of deep neural networks, when presented with tiny
perturbations in their inputs, was independently discovered by several research
groups in 2013. However, despite enormous effort, these adversarial examples
remained a counterintuitive phenomenon with no simple testable explanation. In
this paper, we introduce a new conceptual framework for how the decision
boundary between classes evolves during training, which we call the Dimpled
Manifold Model. In particular, we demonstrate that training is divided
into two distinct phases. The …
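The "tiny perturbations" the abstract refers to can be illustrated with the Fast Gradient Sign Method (FGSM, Goodfellow et al.) — a standard attack that is not the paper's own contribution, shown here only as a minimal sketch of the phenomenon on a toy logistic classifier:

```python
import numpy as np

def fgsm_perturb(w, b, x, y, eps):
    """Nudge x by eps in the direction that increases the loss.

    w, b : weights/bias of a logistic classifier p = sigmoid(w.x + b)
    x    : input vector;  y : true label in {0, 1};  eps : step size
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability
    grad_x = (p - y) * w                     # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)         # FGSM step

# A confidently classified point flips class under a small perturbation.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])                     # w @ x + b = 0.2 > 0 -> class 1
x_adv = fgsm_perturb(w, b, x, y=1, eps=0.25)
print(w @ x + b > 0, w @ x_adv + b > 0)      # prediction flips
```

Even this linear toy shows the fragility: the per-coordinate change is bounded by eps, yet the predicted class flips. Deep networks exhibit the same behavior at far smaller perturbation sizes, which is what the Dimpled Manifold Model sets out to explain.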
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
DevSecOps Engineer
@ LinQuest | Beavercreek, Ohio, United States
Senior Developer, Vulnerability Collections (Contractor)
@ SecurityScorecard | Remote (Turkey or Latin America)
Cyber Security Intern 03416 NWSOL
@ North Wind Group | RICHLAND, WA
Senior Cybersecurity Process Engineer
@ Peraton | Fort Meade, MD, United States
Sr. Manager, Cybersecurity and Info Security
@ AESC | Smyrna, TN 37167, US | Santa Clara, CA 95054, US | Florence, SC 29501, US | Bowling Green, KY 42101, US