April 27, 2022, 1:20 a.m. | Pieter-Jan Kindermans, Charles Staats

cs.CR updates on arXiv.org arxiv.org

Neural networks work remarkably well in practice, and theoretically they can
be universal approximators. However, they still make mistakes, and a specific
type of mistake, called an adversarial error, seems inexcusable to humans. In
this work, we analyze both test errors and adversarial errors on a
well-controlled but highly non-linear visual classification problem. We find
that, when approximating training on infinite data, test errors tend to be
close to the ground-truth decision boundary. Qualitatively speaking, these are also more …
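The "adversarial errors" the abstract refers to are mistakes triggered by small, targeted input perturbations that flip an otherwise correct prediction. As a minimal sketch of the idea (not the paper's setup — the linear classifier, its weights, and the `fgsm` helper below are hypothetical, using the fast gradient sign method on a toy logistic model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed toy linear classifier: predicts class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, true_label, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic loss, that gradient is (p - y) * w, so for a correctly
    classified point the step pushes the logit toward the wrong class.
    """
    p = sigmoid(w @ x + b)
    grad = (p - true_label) * w
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.5, 0.0])          # logit w.x + b = 1.1 > 0, class 1
x_adv = fgsm(x, true_label=1, eps=0.8)  # small per-coordinate perturbation
print(predict(x), predict(x_adv))       # the perturbed input is misclassified
```

On a non-linear model the gradient is computed by backpropagation instead of in closed form, but the mechanism is the same: a perturbation bounded per coordinate by `eps` crosses the model's decision boundary even though, to a human, the input barely changes.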

