March 1, 2024, 5:11 a.m. | Anan Kabaha, Dana Drachsler-Cohen

cs.CR updates on arXiv.org

arXiv:2402.19322v1 Announce Type: cross
Abstract: Neural networks are successful in various applications but are also susceptible to adversarial attacks. To show the safety of network classifiers, many verifiers have been introduced to reason about the local robustness of a given input to a given perturbation. While successful, local robustness cannot generalize to unseen inputs. Several works analyze global robustness properties; however, none can provide a precise guarantee about the cases where a network classifier does not change its classification. In …
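For context, local robustness at an input x is commonly formalized as follows (this is the standard L-infinity formulation, not necessarily the exact notation used in the paper): for a classifier N and perturbation radius ε,

    ∀x′. ‖x′ − x‖_∞ ≤ ε  ⟹  argmax N(x′) = argmax N(x)

that is, every input in the ε-ball around x receives the same label as x. Global robustness properties instead quantify over all inputs rather than a single given x, which is what makes precise guarantees harder to obtain.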

adversarial attacks, arXiv, cs.CR, cs.LG, cs.PL, global robustness, local robustness, neural networks, safety, verification
