April 23, 2024, 4:11 a.m. | Haniyeh Ehsani Oskouie, Farzan Farnia

cs.CR updates on arXiv.org arxiv.org

arXiv:2212.03095v2 Announce Type: replace-cross
Abstract: Interpreting neural network classifiers using gradient-based saliency maps has been extensively studied in the deep learning literature. While the existing algorithms manage to achieve satisfactory performance in application to standard image recognition datasets, recent works demonstrate the vulnerability of widely-used gradient-based interpretation schemes to norm-bounded perturbations adversarially designed for every individual input sample. However, such adversarial perturbations are commonly designed using the knowledge of an input sample, and hence perform sub-optimally in application to an …

Subjects: cs.CR, cs.AI, cs.CV, cs.LG, stat.ML
