Feb. 20, 2024, 5:11 a.m. | Jun-Jie Zhang, Deyu Meng

cs.CR updates on arXiv.org

arXiv:2402.10983v1 Announce Type: cross
Abstract: Neural networks demonstrate inherent vulnerability to small, non-random perturbations, which emerge as adversarial attacks. Such attacks, born from the gradient of the loss function relative to the input, are discerned as input conjugates, revealing a systemic fragility within the network structure. Intriguingly, a mathematical congruence manifests between this mechanism and the uncertainty principle of quantum physics, casting light on a hitherto unanticipated interdisciplinary connection. This susceptibility is generally intrinsic to neural network systems, highlighting not only the …
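
The attack mechanism the abstract summarizes, a small perturbation aligned with the gradient of the loss with respect to the input, is the same one behind standard gradient-based attacks such as the Fast Gradient Sign Method (FGSM). Below is a minimal PyTorch sketch of that idea, assuming a classifier trained with cross-entropy loss; the function name, model interface, and epsilon value are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module,
                 x: torch.Tensor,
                 y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial input by stepping along the sign of the
    gradient of the loss with respect to the input (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The perturbation is small and non-random: it follows the input
    # gradient of the loss, the mechanism the abstract describes.
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```

Even a small epsilon often suffices to flip the model's prediction, which is the "systemic fragility" the paper relates to the uncertainty principle.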
