Feb. 7, 2024, 5:10 a.m. | Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

cs.CR updates on arXiv.org

It has been widely observed that neural networks are vulnerable to small additive perturbations of the input that cause misclassification. In this paper, we focus on $\ell_0$-bounded adversarial attacks and aim to theoretically characterize the performance of adversarial training for an important class of truncated classifiers. Such classifiers have been shown to perform strongly in the $\ell_0$-adversarial setting, both empirically and theoretically under the Gaussian mixture model. The main contribution of this paper is to prove a novel generalization bound …
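The paper's construction is only excerpted above, but a toy sketch may convey the intuition for why truncation helps against sparse perturbations. The NumPy code below is not the authors' classifier: the functions `truncated_score` and `l0_attack` and every parameter are hypothetical, assuming a plain linear classifier $\mathrm{sign}(\langle w, x \rangle)$ on a Gaussian mixture, where "truncation" means discarding the largest-magnitude per-coordinate contributions before summing.

```python
import numpy as np

def truncated_score(x, w, k):
    """Linear score with the k largest-magnitude per-coordinate
    contributions discarded, so any k adversarially corrupted
    coordinates have only bounded influence on the decision."""
    contrib = w * x                              # per-coordinate contributions
    if k <= 0:
        return contrib.sum()
    keep = np.argsort(np.abs(contrib))[:-k]      # drop the k largest in magnitude
    return contrib[keep].sum()

def l0_attack(x, w, budget, magnitude=50.0):
    """Toy l0-bounded attack (hypothetical): overwrite the `budget`
    coordinates whose contributions are most influential, pushing
    each one hard against the classifier's score."""
    x_adv = x.copy()
    worst = np.argsort(np.abs(w * x))[-budget:]  # most influential coordinates
    x_adv[worst] = -np.sign(w[worst]) * magnitude
    return x_adv

rng = np.random.default_rng(0)
d, budget = 200, 5
mu = rng.normal(size=d)              # mean of the +1 Gaussian component
w = mu / np.linalg.norm(mu)          # linear classifier aligned with the mean
x = mu + rng.normal(size=d)          # a clean sample from the +1 class

x_adv = l0_attack(x, w, budget)
print("plain score:     clean %+.2f  adversarial %+.2f" % (w @ x, w @ x_adv))
print("truncated score: clean %+.2f  adversarial %+.2f"
      % (truncated_score(x, w, budget), truncated_score(x_adv, w, budget)))
```

Because an $\ell_0$-bounded adversary can touch only a few coordinates, discarding the largest-magnitude contributions caps how far those coordinates can move the score: in this sketch the plain inner-product score flips sign under the sparse attack, while the truncated score stays close to its clean value.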

adversarial attacks, adversarial training, neural networks, truncated classifiers, cs.CR, cs.LG
