April 22, 2024, 4:11 a.m. | Lukas Koller, Tobias Ladner, Matthias Althoff

cs.CR updates on arXiv.org arxiv.org

arXiv:2401.14961v2 Announce Type: replace-cross
Abstract: Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly affect the outputs of a neural network. In safety-critical environments, the inputs often contain noisy sensor data; hence, neural networks that are robust against input perturbations are required. To ensure safety, the robustness of a neural network must be formally verified. However, training and formally verifying robust neural networks are both challenging tasks. We address both of these challenges by employing, …
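To make the verification idea concrete, below is a minimal sketch of set-based robustness checking via interval bound propagation for a tiny ReLU network. This is a generic illustration of how an input perturbation set can be propagated through a network to certify robustness; it is not the authors' specific set-based training or verification method, and all weights, the epsilon, and the helper functions are made-up assumptions for the example.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an axis-aligned box [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so the box maps to [relu(lo), relu(hi)]."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def verify_robustness(x, eps, layers, true_label):
    """Return True if every input in the eps-box around x keeps true_label on top."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    # Robust if the lower bound of the true logit exceeds every other logit's upper bound.
    others = [hi[j] for j in range(len(hi)) if j != true_label]
    return lo[true_label] > max(others)

# Toy 2-layer network with random weights (purely illustrative).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2, 0.1])
print(verify_robustness(x, eps=0.01, layers=layers, true_label=0))
```

Because interval bounds are sound but conservative, a `False` result here does not prove an adversarial example exists; it only means this coarse over-approximation could not certify robustness.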

