Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know". (arXiv:2306.07796v1 [cs.LG])
cs.CR updates on arXiv.org
Since 2014, artificial neural networks have been known to be vulnerable to
adversarial attacks, which can fool the network into producing wrong or
nonsensical outputs by making humanly imperceptible alterations to inputs.
While defenses against adversarial attacks have been proposed, they usually
involve retraining a new neural network from scratch, a costly task. In this
work, I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture
for artificial neural networks. My work aims to: - easily convert existing
models …
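The abstract describes the FGN only at a high level, but the core idea — a neuron whose response fades to zero far from its training region, letting the network effectively say "I don't know" — can be illustrated with a minimal sketch. The exact formulation below is an assumption, not the paper's definition: here a standard affine response is damped by a Gaussian envelope around a learned center (`center` and `sigma` are hypothetical parameters for illustration).

```python
import numpy as np

def fgn(x, w, b, center, sigma):
    """Sketch of a Finite-Gaussian-style neuron (illustrative, not the
    paper's exact definition). x: input vector; w, b: ordinary neuron
    weights and bias; center, sigma: assumed Gaussian-envelope parameters."""
    linear = np.dot(w, x) + b                                  # standard neuron response
    envelope = np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))
    return np.tanh(linear) * envelope                          # vanishes far from center

w = np.ones(4)
near = fgn(np.zeros(4), w, b=0.5, center=np.zeros(4), sigma=1.0)       # nonzero near center
far = fgn(100 * np.ones(4), w, b=0.5, center=np.zeros(4), sigma=1.0)   # ~0 far from center
print(near, far)
```

Under this sketch, inputs close to the center behave like an ordinary neuron, while distant (e.g. adversarially shifted or out-of-distribution) inputs drive the output toward zero rather than toward a confident wrong answer.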