PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction & Attribution Sensitivity Analysis
April 18, 2024, 4:11 a.m. | Dipkamal Bhusal, Md Tanvirul Alam, Monish K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
cs.CR updates on arXiv.org
Abstract: Deep neural networks for classification are vulnerable to adversarial attacks, where small perturbations to input samples lead to incorrect predictions. This susceptibility, combined with the black-box nature of such networks, limits their adoption in critical applications like autonomous driving. Feature-attribution-based explanation methods quantify the relevance of input features to a model's prediction on a given sample, thus explaining model decisions. However, we observe that both model predictions and feature attributions for input samples are sensitive to noise. We …
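The core observation above, that predictions and feature attributions react to small input noise, can be sketched in a minimal, hypothetical form. The code below is not the paper's method; it assumes a simple linear-softmax classifier and input-gradient attributions, and measures how much both shift under Gaussian perturbations. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(W, x):
    """Class probabilities of a linear-softmax model with logits z = W x."""
    return softmax(W @ x)

def attribution(W, x):
    """Input-gradient attribution for the top predicted class c.

    For logits z = W x, the gradient of p_c w.r.t. x is
    sum_k p_c * (delta_ck - p_k) * W[k].
    """
    p = predict(W, x)
    c = int(np.argmax(p))
    grad = np.zeros_like(x)
    for k in range(W.shape[0]):
        grad += p[c] * ((1.0 if k == c else 0.0) - p[k]) * W[k]
    return grad

def sensitivity(W, x, sigma=0.05, n_samples=20, seed=0):
    """Mean shift of prediction and attribution under Gaussian input noise."""
    rng = np.random.default_rng(seed)
    p0, a0 = predict(W, x), attribution(W, x)
    dp, da = [], []
    for _ in range(n_samples):
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
        dp.append(np.linalg.norm(predict(W, x_noisy) - p0))
        da.append(np.linalg.norm(attribution(W, x_noisy) - a0))
    return float(np.mean(dp)), float(np.mean(da))
```

A detector in this spirit would compare these two sensitivity scores against thresholds calibrated on clean data and flag inputs whose scores fall outside the clean range; the paper's actual analysis is necessarily more involved than this toy sketch.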