AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking. (arXiv:2303.18131v1 [cs.CR])
cs.CR updates on arXiv.org
Deep neural networks (DNNs) are vulnerable to adversarial examples, which may
lead to catastrophe in security-critical domains. Numerous detection methods
have been proposed, either to characterize the feature uniqueness of adversarial
examples or to distinguish the DNN behavior activated by adversarial examples.
Feature-based detections cannot handle adversarial examples with large
perturbations, and they require a large number of specific adversarial examples.
The other mainstream approach, model-based detection, which characterizes input
properties through model behaviors, suffers from heavy computational cost. To address the …
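The truncated abstract does not spell out AdvCheck's actual local-gradient statistic, so the sketch below is only a generic illustration of the underlying idea: an input's loss gradient can behave differently at adversarial points than at clean ones. It uses a toy NumPy logistic model and an FGSM-style perturbation (Goodfellow et al.); the weights, inputs, and gradient-norm statistic are all made-up assumptions, not the paper's method.

```python
import numpy as np

# Toy logistic "model" with fixed random weights (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def input_gradient(x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input: (p - y) * w
    p = predict(x)
    return (p - y) * w

# A clean input with label y = 1.
x = rng.normal(size=8)
y = 1.0

# FGSM-style adversarial perturbation: step in the sign
# of the loss gradient to push the prediction away from y.
eps = 0.5
x_adv = x + eps * np.sign(input_gradient(x, y))

# A simple local-gradient statistic: the gradient norm at the input.
# The perturbed point sits at higher loss, so its gradient norm is larger.
g_clean = np.linalg.norm(input_gradient(x, y))
g_adv = np.linalg.norm(input_gradient(x_adv, y))
print(g_clean, g_adv)
```

In this toy setup the adversarial point always has the larger gradient norm, hinting at why a local-gradient check can serve as a detection signal without needing a large corpus of labeled adversarial examples.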