March 30, 2023, 1:10 a.m. | Wei Wei, Jiahuan Zhou, Ying Wu

cs.CR updates on arXiv.org

It is broadly known that deep neural networks are susceptible to being fooled
by adversarial examples whose perturbations are imperceptible to humans.
Various defenses have been proposed to improve adversarial robustness, among
which adversarial training methods are the most effective. However, most of
these methods treat training samples independently and require a tremendous
number of samples to train a robust network, ignoring the latent structural
information among those samples. In this work, we propose a novel Local
Structure Preserving (LSP) …
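The snippet above is truncated before the LSP method is detailed, but the adversarial training it builds on is standard. As a minimal sketch of that baseline, the following PyTorch code implements one training step on L-infinity PGD adversarial examples (Madry et al., 2018); the eps, alpha, and steps values are illustrative hyperparameters, not ones stated in the paper, and no LSP regularizer is included since the abstract does not specify it.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Craft L-infinity PGD adversarial examples: start from a random
    # point in the eps-ball, then take signed-gradient ascent steps,
    # projecting back into the ball after each step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One step of standard adversarial training: minimize the loss on
    # adversarial examples crafted against the current model.
    model.eval()                      # fix BN/dropout while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

The abstract's point is that this per-sample loss treats each (x, y) pair in isolation; the proposed LSP approach would add a term exploiting structure among samples, whose exact form is not recoverable from the truncated text.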

