Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization. (arXiv:2209.04930v1 [cs.CR])
Sept. 13, 2022, 1:20 a.m. | Ehsan Nowroozi, Mohammadreza Mohammadi, Pargol Golmohammadi, Yassine Mekdad, Mauro Conti, Selcuk Uluagac
cs.CR updates on arXiv.org arxiv.org
In the past decades, the rise of artificial intelligence has given us the
capability to solve the most challenging problems in our day-to-day lives,
such as cancer prediction and autonomous navigation. However, these
applications might not be reliable if they are not secured against adversarial
attacks. In addition, recent works have demonstrated that some adversarial
examples are transferable across different models. Therefore, it is crucial to
avoid such transferability via robust models that resist adversarial
manipulations. In this paper, we propose a feature …
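The excerpt cuts off before the paper's actual method, but the general idea behind feature-randomization defenses can be sketched. The function and parameter names below are hypothetical illustrations, not the authors' implementation: by randomly masking features at inference time, the defended model's effective feature set becomes stochastic, so a perturbation crafted against one fixed model is less likely to transfer.

```python
import random

def randomize_features(features, keep_prob=0.8, seed=None):
    """Hypothetical sketch of feature randomization (not the paper's method).

    Each feature is independently kept with probability `keep_prob` and
    zeroed otherwise, so an attacker cannot predict which features the
    defended model will actually use on a given query.
    """
    rng = random.Random(seed)
    return [f if rng.random() < keep_prob else 0.0 for f in features]

# Toy usage: the same input yields a different effective representation
# on each call, which is the property the defense relies on.
x = [0.3, 1.2, -0.7, 0.9, 2.1]
print(randomize_features(x, keep_prob=0.6))
```

In practice the randomization would be applied inside the network (e.g. to an intermediate feature map) rather than to raw inputs, and the model would be trained to remain accurate under it.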