When adversarial examples are excusable. (arXiv:2204.11985v1 [cs.LG])
April 27, 2022, 1:20 a.m. | Pieter-Jan Kindermans, Charles Staats
cs.CR updates on arXiv.org arxiv.org
Neural networks work remarkably well in practice, and in theory they can be
universal approximators. However, they still make mistakes, and a specific
type of mistake, called adversarial errors, seems inexcusable to humans. In
this work, we analyze both test errors and adversarial errors on a
well-controlled but highly non-linear visual classification problem. We find
that, when approximating training on infinite data, test errors tend to be
close to the ground-truth decision boundary. Qualitatively speaking, these are also more …
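The adversarial errors the abstract refers to are typically produced by perturbing an input along the gradient of the loss. A minimal sketch of that idea, not taken from the paper, is the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; the weights, input, and epsilon below are made-up values chosen only to show a confident prediction flipping under a small perturbation:

```python
import numpy as np

# Hypothetical illustration (not from the paper): FGSM on a fixed
# logistic-regression classifier. A small sign-of-gradient step on the
# input flips a confident class-1 prediction.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed linear classifier: predict class 1 when w.x + b > 0.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

# A point the model classifies confidently as class 1.
x = np.array([1.0, 0.1])
p_clean = predict(x)  # ~0.90

# For logistic loss with true label y, the input gradient is (p - y) * w.
# FGSM steps along the sign of that gradient to increase the loss.
y = 1.0
grad_x = (p_clean - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_adv = predict(x_adv)  # drops below 0.5: the prediction flips
print(p_clean > 0.5, p_adv < 0.5)
```

The perturbation is bounded in each coordinate by `eps`, yet it moves the point across the model's decision boundary; the paper's observation is about where such errors sit relative to the *ground-truth* boundary, which this toy model does not capture.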
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Check Team Members / Cyber Consultants / Pen Testers
@ Resillion | Birmingham, United Kingdom
Security Officer Field Training Officer- Full Time (Harrah's LV)
@ Caesars Entertainment | Las Vegas, NV, United States
Cybersecurity Subject Matter Expert (SME)
@ SMS Data Products Group, Inc. | Fort Belvoir, VA, United States
AWS Security Engineer
@ IntelliPro Group Inc. | Palo Alto, CA
Information Security Analyst
@ Freudenberg Group | Alajuela