Set-Based Training for Neural Network Verification
April 22, 2024, 4:11 a.m. | Lukas Koller, Tobias Ladner, Matthias Althoff
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly affect the outputs of a neural network. In safety-critical environments, inputs often contain noisy sensor data, so neural networks that are robust against such input perturbations are required. To ensure safety, this robustness must be formally verified. However, both training and formally verifying robust neural networks are challenging. We address both of these challenges by employing …
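The abstract is truncated, so the authors' specific set-based training procedure is not shown here. As a generic illustration of the kind of set-based reasoning the abstract alludes to, the sketch below propagates an interval input set (an l-infinity ball modeling bounded sensor noise) through a tiny ReLU network using plain interval arithmetic. The network shape, weights, and epsilon radius are made-up illustrative values, not taken from the paper.

# A minimal sketch of set-based (interval) bound propagation, using only
# NumPy. This is NOT the paper's method -- the abstract is truncated --
# just a generic example of bounding a network's outputs over an input set.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the elementwise interval [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)  # positive entries preserve interval order
    W_neg = np.minimum(W, 0.0)  # negative entries flip it
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-layer network with fixed weights (illustrative values only).
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.6, -1.0]])
b2 = np.array([0.05])

# Input set: an l-infinity ball of radius eps around a nominal input x0,
# modeling bounded sensor noise.
x0 = np.array([0.5, -0.3])
eps = 0.05
lo, hi = x0 - eps, x0 + eps

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)

# If these bounds certify the desired property (e.g., the output stays below
# a threshold) for the whole interval, robustness is verified for every
# perturbation in the input set.
print(f"output bounds: [{lo[0]:.4f}, {hi[0]:.4f}]")

Interval arithmetic is the coarsest common set representation; verification tools often use tighter set representations such as zonotopes to reduce over-approximation, at higher computational cost.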