April 3, 2023, 1:10 a.m. | Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng

cs.CR updates on arXiv.org arxiv.org

Deep neural networks (DNNs) are vulnerable to adversarial examples, which can
lead to catastrophic failures in security-critical domains. Numerous detection
methods have been proposed that either characterize the feature uniqueness of
adversarial examples or distinguish the DNN behaviors those examples activate.
Feature-based detections cannot handle adversarial examples with large
perturbations, and they require a large number of specific adversarial
examples. The other mainstream approach, model-based detection, which
characterizes input properties through model behaviors, suffers from heavy
computation cost. To address the …

