Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective. (arXiv:2206.12227v1 [cs.CR])
June 27, 2022, 1:20 a.m. | Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong
cs.CR updates on arXiv.org (arxiv.org)
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. These black-box models, however, often exhibit uncertainty and poor explainability in practice. Furthermore, neural networks themselves are vulnerable to adversarial attacks. For these reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is …
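To make the notion of local adversarial robustness concrete, here is a minimal, hypothetical Python sketch: it samples perturbations inside an L∞ ball of radius eps around an input and checks whether a toy linear classifier keeps the same label. The model weights, the `predict` and `locally_robust` helpers, and the epsilon values are all illustrative assumptions, not from the paper; the formal verification methods the survey covers would certify the property for all perturbations in the ball rather than merely sampling.

```python
# Hypothetical sketch: an empirical (non-formal) check of local adversarial
# robustness for a tiny linear classifier. Formal verifiers would prove the
# property for ALL perturbations in the epsilon-ball instead of sampling.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class linear model f(x) = Wx + b (weights chosen arbitrarily).
W = np.array([[1.0, -2.0, 0.5],
              [-1.5, 1.0, 0.2]])
b = np.array([0.1, -0.1])

def predict(x: np.ndarray) -> int:
    """Return the predicted class label for input x."""
    return int(np.argmax(W @ x + b))

def locally_robust(x: np.ndarray, eps: float, n_samples: int = 10_000) -> bool:
    """Empirically test whether every sampled x' with ||x' - x||_inf <= eps
    receives the same label as x. A sound verifier would decide this
    exhaustively (e.g., via bound propagation or constraint solving)."""
    label = predict(x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)  # L_inf-bounded perturbation
        if predict(x + delta) != label:
            return False  # found a (sampled) adversarial example
    return True

x0 = np.array([0.3, -0.2, 0.8])
print("robust at eps=0.05:", locally_robust(x0, eps=0.05))
print("robust at eps=1.0: ", locally_robust(x0, eps=1.0))
```

A failed check only demonstrates non-robustness; a passed check remains inconclusive, which is precisely why the survey focuses on formal verification rather than testing.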
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Director, Threat and Attack Research
@ Singtel | Macquarie Park, Australia
Manager Information Security
@ Diebold Nixdorf | Remote, United States
Senior Analyst, IT Information Security
@ IHG | GA, United States
Eurizon Capital SGR - Compliance Senior Specialist
@ Intesa Sanpaolo | Milano, IT
Tier 1 Fusion Security Analyst
@ Nielsen | Bengaluru, India