Jan. 29, 2024, 2:10 a.m. | Lukas Koller, Tobias Ladner, Matthias Althoff

cs.CR updates on arXiv.org

Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can lead to substantially different network outputs. Safety-critical environments require neural networks that are robust against input perturbations. However, training and formally verifying robust neural networks is challenging. We address this challenge by employing, for the first time, an end-to-end set-based training procedure that trains robust neural networks for formal verification. Our training procedure drastically simplifies the subsequent formal robustness verification of the trained neural network. …
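To make the general idea of set-based reasoning concrete, the sketch below propagates an entire box of perturbed inputs through a small ReLU network using interval arithmetic, one common set representation for robustness certification. This is only an illustrative stand-in, not the authors' training procedure: the paper's actual set representation and loss may differ, and the network sizes, epsilon, and function names here are assumptions for the example.

```python
# Illustrative sketch only: interval-style set propagation through a small
# fully connected ReLU network. This is a generic stand-in for set-based
# reasoning, not the authors' specific procedure; all shapes, names, and the
# epsilon value below are assumptions for the example.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an axis-aligned box [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps the box bounds endpoint-wise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def propagate_box(weights, biases, x, eps):
    """Push the perturbation set {x' : ||x' - x||_inf <= eps} through the net."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(weights) - 1:  # hidden layers use ReLU
            lo, hi = interval_relu(lo, hi)
    return lo, hi  # bounds on every output logit over the whole input set

# Toy two-layer network and a nominal input (random placeholders).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.standard_normal(4)

lo, hi = propagate_box(weights, biases, x, eps=0.05)
true_class = 0
# Certified robust if the true logit's lower bound beats every other
# logit's upper bound for all inputs in the perturbation set.
others = [i for i in range(len(lo)) if i != true_class]
print("certified robust:", bool(lo[true_class] > max(hi[i] for i in others)))
```

In a set-based training loop, bounds like these would feed a loss that rewards a provable margin between the true logit's lower bound and the other logits' upper bounds, which is what makes the subsequent formal verification step cheap.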

