Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks. (arXiv:2208.09872v1 [cs.LG])
Aug. 23, 2022, 1:20 a.m. | Zhaodi Zhang, Yiting Wu, Si Liu, Jing Liu, Min Zhang
cs.CR updates on arXiv.org arxiv.org
The robustness of deep neural networks is crucial to modern AI-enabled
systems and should be formally verified. Sigmoid-like neural networks have been
adopted in a wide range of applications. Due to their non-linearity,
Sigmoid-like activation functions are usually over-approximated for efficient
verification, which inevitably introduces imprecision. Considerable efforts
have been devoted to finding so-called tighter approximations that yield
more precise verification results. However, existing definitions of
tightness are heuristic and lack theoretical foundations. We conduct a thorough empirical
analysis of …
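To make the over-approximation idea concrete, the sketch below computes a sound pair of parallel linear bounds for the sigmoid over an input interval, using the chord slope and solving for interior extrema in closed form. This is a generic illustration of linear relaxation, not the approximation proposed in the paper; the function names are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def linear_bounds(l, u):
    """Sound parallel linear bounds on [l, u]:
        k*x + b_lo <= sigmoid(x) <= k*x + b_up
    A generic chord-slope relaxation, for illustration only."""
    k = (sigmoid(u) - sigmoid(l)) / (u - l)  # chord slope
    # Extrema of g(x) = sigmoid(x) - k*x lie at the endpoints or where
    # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) = k, solvable in closed form.
    cands = [l, u]
    disc = 1.0 - 4.0 * k
    if disc >= 0:
        for s in ((1 + math.sqrt(disc)) / 2, (1 - math.sqrt(disc)) / 2):
            if 0 < s < 1:
                x = math.log(s / (1 - s))  # inverse sigmoid
                if l < x < u:
                    cands.append(x)
    offsets = [sigmoid(x) - k * x for x in cands]
    return k, min(offsets), max(offsets)
```

The gap `b_up - b_lo` is one natural (heuristic) measure of how loose the relaxation is; the paper's point is precisely that such heuristic tightness notions lack a theoretical foundation.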
Jobs in InfoSec / Cybersecurity
Azure DevSecOps Cloud Engineer II
@ Prudent Technology | McLean, VA, USA
Security Engineer III - Python, AWS
@ JPMorgan Chase & Co. | Bengaluru, Karnataka, India
SOC Analyst (Threat Hunter)
@ NCS | Singapore, Singapore
Managed Services Information Security Manager
@ NTT DATA | Sydney, Australia
Senior Security Engineer (Remote)
@ Mattermost | United Kingdom
Penetration Tester (Part Time & Remote)
@ TestPros | United States - Remote