On the Robustness of Bayesian Neural Networks to Adversarial Attacks
Feb. 29, 2024, 5:11 a.m. | Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
cs.CR updates on arXiv.org
Abstract: Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparameterized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy …
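As a concrete illustration of the gradient-based attacks the abstract refers to, the sketch below shows a standard FGSM-style perturbation adapted to a Bayesian neural network by averaging the input gradient over posterior weight samples. This is a minimal, hypothetical PyTorch example: the posterior-sampling helper sample_posterior_weights, the model, and the epsilon budget are assumptions for illustration, not the authors' setup.

    # Hedged sketch: FGSM against a BNN using the posterior-averaged input gradient.
    # `sample_posterior_weights` is a hypothetical helper that draws one set of
    # weights from the posterior and loads them into `model` in place.
    import torch
    import torch.nn.functional as F

    def bnn_fgsm_attack(model, sample_posterior_weights, x, y,
                        epsilon=0.03, n_samples=10):
        """Craft an FGSM perturbation by averaging the loss gradient
        w.r.t. the input over posterior weight samples."""
        x = x.clone().detach().requires_grad_(True)
        grad_accum = torch.zeros_like(x)
        for _ in range(n_samples):
            sample_posterior_weights(model)           # draw posterior sample (assumed helper)
            loss = F.cross_entropy(model(x), y)       # loss under this weight sample
            grad, = torch.autograd.grad(loss, x)      # gradient of loss w.r.t. the input
            grad_accum += grad
        avg_grad = grad_accum / n_samples             # Monte Carlo estimate of the expected gradient
        # Step in the sign of the expected gradient and clip back to the input range.
        x_adv = (x + epsilon * avg_grad.sign()).clamp(0.0, 1.0).detach()
        return x_adv

In this framing, the attack succeeds only if the expected input gradient is informative; the paper's large-data, overparameterized analysis concerns when that expectation degenerates.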