Dec. 7, 2022, 2:10 a.m. | Haniyeh Ehsani Oskouie, Farzan Farnia

cs.CR updates on arXiv.org

Interpreting neural network classifiers using gradient-based saliency maps
has been extensively studied in the deep learning literature. While existing
algorithms achieve satisfactory performance on standard image recognition
datasets, recent works have demonstrated the vulnerability of widely used
gradient-based interpretation schemes to norm-bounded perturbations
adversarially designed for each individual input sample. However, such
adversarial perturbations are commonly crafted using knowledge of the input
sample, and hence perform sub-optimally on an unknown or constantly changing
data …
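
To make the setting concrete, below is a minimal sketch of a gradient-based saliency map, followed by a single norm-bounded perturbation step that distorts the interpretation. The model, input shape, epsilon, and attack objective are illustrative assumptions, not the paper's actual setup; the works cited in the abstract differ in their exact attack losses.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical smooth classifier (Softplus keeps second-order gradients
# nonzero, which the interpretation attack below relies on).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.Softplus(),
    nn.Linear(64, 10),
)
model.eval()

def saliency_map(inp):
    # Gradient of the predicted class score w.r.t. the input pixels,
    # reduced over channels to a per-pixel importance map.
    logits = model(inp)
    c = logits.argmax(dim=1).item()
    grad, = torch.autograd.grad(logits[0, c], inp, create_graph=True)
    return grad.abs().amax(dim=1)  # shape: (1, 32, 32)

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image
base_map = saliency_map(x).detach()

# One FGSM-style step that pushes the saliency map away from the clean
# map while the perturbation stays inside an L-infinity ball of radius
# eps (an illustrative objective, not the paper's method).
eps = 8.0 / 255.0
loss = -(saliency_map(x) - base_map).pow(2).sum()
grad_x, = torch.autograd.grad(loss, x)
x_adv = (x - eps * grad_x.sign()).clamp(0.0, 1.0).detach()

delta = (saliency_map(x_adv.requires_grad_(True)) - base_map).abs().mean()
print("mean saliency change:", delta.item())

Note that this attack computes grad_x from the specific input x, i.e., it needs per-sample access; that is precisely the limitation the abstract raises for unknown or constantly changing data.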

Tags: adversarial networks, neural networks
