Web: http://arxiv.org/abs/2204.11985

April 27, 2022, 1:20 a.m. | Pieter-Jan Kindermans, Charles Staats

cs.CR updates on arXiv.org

Neural networks work remarkably well in practice, and theoretically they can
be universal approximators. However, they still make mistakes, and a specific
type of mistake, called an adversarial error, seems inexcusable to humans. In
this work, we analyze both test errors and adversarial errors on a
well-controlled but highly non-linear visual classification problem. We find
that, when approximating training on infinite data, test errors tend to be
close to the ground-truth decision boundary. Qualitatively speaking, these are also more …
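The excerpt above does not specify the paper's task or attack, so here is a minimal, hypothetical sketch of the kind of analysis it describes: train a small classifier on a toy non-linear 2D problem with a known ground-truth boundary (the unit circle, an assumption for illustration), generate adversarial errors with a one-step FGSM-style perturbation (also an assumption, not necessarily the authors' attack), and compare how far test errors and adversarial errors lie from the ground-truth boundary.

```python
# Hedged sketch (not the authors' code): compare distances of test errors
# and FGSM-style adversarial errors to a known ground-truth decision
# boundary. Toy problem: label = inside/outside the unit circle, so the
# distance from x to the boundary is | ||x|| - 1 |.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample(n):
    x = torch.empty(n, 2).uniform_(-2, 2)
    y = (x.norm(dim=1) < 1.0).long()  # ground-truth label: unit circle
    return x, y

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):       # fresh samples each step roughly
    x, y = sample(256)         # approximate "training on infinite data"
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

def boundary_dist(x):
    # Distance to the ground-truth boundary (the unit circle).
    return (x.norm(dim=1) - 1.0).abs()

x, y = sample(10_000)
pred = model(x).argmax(dim=1)
test_err = x[pred != y]

# One-step FGSM perturbation of the correctly classified points.
x_adv = x[pred == y].clone().requires_grad_(True)
loss_fn(model(x_adv), y[pred == y]).backward()
x_adv = (x_adv + 0.1 * x_adv.grad.sign()).detach()
adv_err = x_adv[model(x_adv).argmax(dim=1) != y[pred == y]]

print(f"test errors:        {len(test_err):5d}, "
      f"mean boundary dist {boundary_dist(test_err).mean():.3f}")
print(f"adversarial errors: {len(adv_err):5d}, "
      f"mean boundary dist {boundary_dist(adv_err).mean():.3f}")
```

Under the abstract's finding, the test-error distances should come out small (errors concentrate near the true boundary); the circle task, network size, and the 0.1 perturbation budget are all illustrative choices, not values from the paper.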

Tags: adversarial, lg
