July 14, 2022, 1:20 a.m. | Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker

cs.CR updates on arXiv.org

Vulnerability to adversarial attacks is one of the principal hurdles to the
adoption of deep learning in safety-critical applications. Despite significant
efforts, both practical and theoretical, training deep learning models robust
to adversarial attacks is still an open problem. In this paper, we analyse the
geometry of adversarial attacks in the large-data, overparameterized limit for
Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to
gradient-based attacks arises as a result of degeneracy in the data
distribution, i.e., …

adversarial attacks, cs.LG, networks, neural networks, robustness
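
For readers unfamiliar with the attack model the abstract refers to, below is a minimal sketch of a gradient-based attack (FGSM, the fast gradient sign method) run against an ensemble of networks standing in for posterior samples from a BNN. Attacking the gradient of the expected loss over posterior samples is one common heuristic for BNNs; this is an illustration of the general attack class, not the specific construction analysed in the paper, and all names here (`SmallNet`, `fgsm_attack`, `epsilon`, `n_samples`) are hypothetical.

```python
# Illustrative sketch only: FGSM against a mock "posterior" of networks.
# None of these names or settings come from the paper.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier; a stand-in for one posterior sample of a BNN."""
    def __init__(self, in_dim=20, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

def fgsm_attack(models, x, y, epsilon=0.1):
    """One FGSM step using the expected loss gradient over `models`.

    For a deterministic network, pass a single-element list. For a BNN,
    averaging gradients over posterior samples approximates the gradient
    of the posterior-predictive loss.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()
    # Average the loss over posterior samples, then differentiate w.r.t. x.
    loss = torch.stack([loss_fn(m(x_adv), y) for m in models]).mean()
    loss.backward()
    # Perturb each input in the direction that increases the expected loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(8, 20)                                # toy batch
    y = torch.randint(0, 2, (8,))
    posterior_samples = [SmallNet() for _ in range(10)]   # mock posterior
    x_adv = fgsm_attack(posterior_samples, x, y)
    print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```

The averaged-gradient step is what makes this an attack on the Bayesian posterior predictive rather than on any single sampled network; the paper's analysis concerns how such loss gradients behave for BNNs in the large-data, overparameterized limit.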
