End-To-End Set-Based Training for Neural Network Verification. (arXiv:2401.14961v1 [cs.LG])
cs.CR updates on arXiv.org arxiv.org
Neural networks are vulnerable to adversarial attacks, i.e., small input
perturbations can result in substantially different outputs of a neural
network. Safety-critical environments require neural networks that are robust
against input perturbations. However, training and formally verifying robust
neural networks is challenging. We address this challenge by employing, for the
first time, an end-to-end set-based training procedure that trains robust neural
networks for formal verification. Our training procedure drastically simplifies
the subsequent formal robustness verification of the trained neural network. …
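The core operation in set-based approaches like this is propagating an entire set of perturbed inputs through the network, rather than a single point. As a minimal illustrative sketch (not the paper's actual method), the following propagates an interval box through a small ReLU network using interval bound propagation; the network weights and the perturbation radius `eps` are made-up example values.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] exactly through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius  # |W| maps the box radius to the output radius
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied to the bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def propagate(x, eps, layers):
    """Bound all outputs reachable from the L-infinity ball around x."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return lo, hi

# Toy two-layer network with random weights (illustration only).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2, 0.1])
lo, hi = propagate(x, 0.05, layers)
```

A set-based training loss can then penalize these output bounds directly (e.g., the worst-case margin between `hi` of wrong classes and `lo` of the true class), which is what makes the trained network cheap to verify afterwards.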