Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations
April 23, 2024, 4:11 a.m. | Haniyeh Ehsani Oskouie, Farzan Farnia
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Interpreting neural network classifiers using gradient-based saliency maps has been extensively studied in the deep learning literature. While existing algorithms achieve satisfactory performance on standard image recognition datasets, recent works demonstrate that widely-used gradient-based interpretation schemes are vulnerable to norm-bounded perturbations adversarially designed for each individual input sample. However, such adversarial perturbations are commonly designed using knowledge of the input sample, and hence perform sub-optimally in application to an …
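For context, the "gradient-based saliency maps" the abstract refers to attribute a classifier's prediction to input pixels via the gradient of the predicted class score with respect to the input. Below is a minimal, self-contained sketch of the vanilla-gradient variant in PyTorch; the tiny classifier and random input are hypothetical placeholders, and this illustrates the generic interpretation scheme under study, not the paper's own algorithm.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in classifier; any differentiable image classifier works.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 10),
    ).eval()

    x = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input image

    logits = model(x)
    score = logits[0, logits.argmax()]  # score of the predicted class
    score.backward()                    # gradient of class score w.r.t. pixels

    # Vanilla-gradient saliency map: per-pixel gradient magnitude,
    # collapsed over color channels.
    saliency = x.grad.abs().max(dim=1).values  # shape (1, 32, 32)

The attack setting in the title concerns a single norm-bounded perturbation delta (||delta|| <= epsilon) that is universal, i.e. crafted once and applied to many inputs, yet still distorts saliency maps like the one above, in contrast to per-sample perturbations that require knowledge of each input.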