June 14, 2023, 1:10 a.m. | Felix Grezes

cs.CR updates on arXiv.org

Since 2014, artificial neural networks have been known to be vulnerable to
adversarial attacks, which can fool the network into producing wrong or
nonsensical outputs by making humanly imperceptible alterations to inputs.
While defenses against adversarial attacks have been proposed, they usually
involve retraining a new neural network from scratch, a costly task. In this
work, I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture
for artificial neural networks. My work aims to:

- easily convert existing models …
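The abstract is cut off before it describes the architecture itself. As a rough illustration of the idea the name suggests, the sketch below assumes the FGN multiplies a classic neuron's response by a Gaussian envelope centered on the data, so that activity fades to zero far from familiar inputs rather than saturating. The class name, the center/sigma parameters, and the sigmoid choice are illustrative assumptions, not details taken from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FiniteGaussianNeuron:
    """Hypothetical sketch: a classic neuron gated by a Gaussian envelope."""

    def __init__(self, weights, bias, center, sigma):
        self.w = np.asarray(weights, dtype=float)   # linear weights
        self.b = float(bias)                        # bias term
        self.c = np.asarray(center, dtype=float)    # assumed Gaussian center
        self.sigma = float(sigma)                   # assumed Gaussian width

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        linear = self.w @ x + self.b                # classic neuron response
        # Envelope shrinks toward 0 as the input moves away from the center
        envelope = np.exp(-np.sum((x - self.c) ** 2) / self.sigma ** 2)
        return sigmoid(linear) * envelope

fgn = FiniteGaussianNeuron(weights=[1.0, -2.0], bias=0.5,
                           center=[0.0, 0.0], sigma=3.0)
print(fgn([0.1, -0.2]))    # near the center: behaves like a sigmoid neuron
print(fgn([50.0, 50.0]))   # far away: output is essentially zero

Under this assumption, an input pushed far outside the training distribution, as many adversarial perturbations are, drives the neuron's output toward zero instead of a confident wrong answer, which would also allow existing trained weights to be reused by wrapping them in the envelope rather than retraining from scratch.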

