June 29, 2022, 1:20 a.m. | Roland S. Zimmermann, Wieland Brendel, Florian Tramer, Nicholas Carlini

cs.CR updates on arXiv.org

Hundreds of defenses have been proposed to make deep neural networks robust
against minimal (adversarial) input perturbations. However, only a handful of
these defenses have held up to their claims, because correctly evaluating
robustness is extremely challenging: weak attacks often fail to find
adversarial examples that nonetheless exist, thereby making a vulnerable
network look robust. In this paper, we propose a test to identify weak
attacks, and thus weak defense evaluations. Our test slightly modifies a
neural network to guarantee …
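
The abstract is truncated here, but the construction it hints at can be illustrated. Below is a minimal, hypothetical sketch of the idea on a toy linear binary classifier: we "plant" an adversarial example by construction (the clean margin is made strictly smaller than the largest logit change achievable within the perturbation budget), so a sufficiently strong attack must find one; an attack that fails is demonstrably weak. The names (`make_planted_model`, `fgsm_attack`) and the linear model are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_planted_model(x, eps):
    """Return a binary linear classifier f(z) = sign(w.z + b) that classifies
    x as +1 but is guaranteed to flip somewhere in the L-inf ball of radius
    eps: the clean margin m is set below the maximal logit change eps*||w||_1,
    so x - eps*sign(w) crosses the decision boundary by construction."""
    w = rng.normal(size=x.shape)
    m = 0.5 * eps * np.abs(w).sum()   # margin strictly below eps * ||w||_1
    b = m - w @ x                     # so that the clean logit equals m > 0
    return w, b

def fgsm_attack(w, b, x, eps):
    """One-step L-inf attack: step against the gradient of the logit,
    which for a linear model is just w."""
    return x - eps * np.sign(w)

eps = 0.1
x = rng.normal(size=20)
w, b = make_planted_model(x, eps)

assert w @ x + b > 0                  # clean input is classified +1
x_adv = fgsm_attack(w, b, x, eps)
found = (w @ x_adv + b) <= 0          # did the attack flip the label?
print("attack found the planted adversarial example:", found)
```

On this toy model the one-step attack succeeds by construction. The point of such a test is the converse: an attack that fails to find the planted adversarial example in a comparably modified real network is exposed as too weak, casting doubt on the evaluation rather than certifying the defense.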
