Dec. 15, 2023, 2:25 a.m. | Ayoub Arous, Andres F Lopez-Lopera, Nael Abu-Ghazaleh, Ihsen Alouani

cs.CR updates on arXiv.org arxiv.org

In this paper, we investigate the following question: Can we obtain
adversarially-trained models without training on adversarial examples? Our
intuition is that training a model with inherent stochasticity, i.e.,
optimizing the parameters by minimizing a stochastic loss function, yields a
robust expectation function that is non-stochastic. In contrast to related
methods that introduce noise at the input level, our proposed approach
incorporates inherent stochasticity by embedding Gaussian noise within the
layers of the NN model at training time. We model …
