Jacobian Norm with Selective Input Gradient Regularization for Improved and Interpretable Adversarial Defense. (arXiv:2207.13036v4 [cs.LG] UPDATED)
Nov. 15, 2022, 2:20 a.m. | Deyin Liu, Lin Wu, Haifeng Zhao, Farid Boussaid, Mohammed Bennamoun, Xianghua Xie
cs.CR updates on arXiv.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples crafted with imperceptible perturbations: a small change to an
input image can induce a misclassification, threatening the reliability of
deployed deep-learning systems. Adversarial training (AT) is often adopted
to improve robustness by training on a mixture of corrupted and clean data.
However, most AT-based methods are ineffective against transferred
adversarial examples, which are generated to fool a
wide …
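To make the threat model concrete, the abstract's "imperceptible perturbation" can be sketched with a one-step FGSM-style attack on a toy logistic-regression classifier. This is an illustrative sketch only, not the paper's Jacobian-norm/input-gradient-regularization defense; the weights, inputs, and epsilon below are synthetic assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: shift x by eps along the sign of the loss gradient.

    For a logistic model with cross-entropy loss, dL/dx = (p - y) * w,
    so the perturbation is bounded by eps in the L-infinity norm.
    """
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Synthetic "model" and input (hypothetical values for demonstration).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0                         # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
clean_p = sigmoid(w @ x + b)
adv_p = sigmoid(w @ x_adv + b)

# The change is tiny (L-inf norm exactly eps) yet it always lowers the
# model's confidence in the true class.
print(float(np.max(np.abs(x_adv - x))))  # 0.1
print(bool(adv_p < clean_p))             # True
```

Adversarial training, as described above, would fold such perturbed inputs back into the training set alongside the clean ones; the paper's contribution is a regularization-based alternative aimed at transferred attacks that AT handles poorly.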