Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization. (arXiv:2207.13036v1 [cs.LG])
July 27, 2022, 1:20 a.m. | Deyin Liu, Lin Wu, Farid Boussaid, Mohammed Bennamoun
cs.CR updates on arXiv.org (arxiv.org)
Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples crafted with imperceptible perturbations: a small change to an input
image can induce a misclassification, threatening the reliability of
deep-learning-based deployed systems. Adversarial training (AT), which trains
on a mixture of adversarial and clean data, is frequently used to improve the
robustness of DNNs. However, existing AT-based methods are either
computationally expensive in generating such adversarial …
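For readers unfamiliar with how such adversarial examples are crafted, a common baseline (not the paper's method, which uses Jacobian norms and selective input gradient regularization) is the fast gradient sign method (FGSM): perturb the input by a small step in the direction of the sign of the loss gradient. A minimal NumPy sketch on a toy logistic "network", with all names and values chosen for illustration:

```python
# Minimal FGSM sketch on a toy logistic-regression model (illustrative only;
# this is NOT the paper's defense, just the classic attack it defends against).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x' = x + eps * sign(dL/dx) for a model p = sigmoid(w.x + b).

    For binary cross-entropy loss, the input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # dL/dx under BCE loss
    return x + eps * np.sign(grad_x)  # L-infinity step of size eps

# Toy weights and a clean input the model classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.1])

x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # clean prediction: class 1 (True)
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial prediction flips (False)
```

Adversarial training, as described above, would then include such perturbed inputs alongside the clean ones in each training batch; generating them with stronger iterative attacks is what makes AT computationally expensive.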
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Engineering Manager - Cloud Security team
@ SentinelOne | Prague, Czech Republic
Legal & Compliance Apprentice (M/F)
@ Novo Nordisk | Puteaux, Île-de-France, FR
Manager, Governance Risk & Compliance
@ Comcast | Virtual