Web: http://arxiv.org/abs/2201.02504

Jan. 10, 2022, 2:20 a.m. | Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, Jin Song Dong

cs.CR updates on arXiv.org

It is known that neural networks are subject to attacks through adversarial
perturbations, i.e., inputs that are maliciously crafted through small
perturbations to induce wrong predictions. Furthermore, such attacks cannot
be fully eliminated: adversarial perturbations remain possible even after
applying mitigation methods such as adversarial training. Multiple approaches
have been developed to detect and reject such adversarial inputs, mostly in
the image domain. Rejecting suspicious inputs, however, may not always be
feasible or ideal. First, normal inputs may be …
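To make the notion of an adversarial perturbation concrete, here is a minimal sketch of a one-step fast-gradient-sign (FGSM-style) attack, which is one common way such inputs are crafted. This is an illustration only, not the paper's method; it uses a toy logistic-regression "network" (weights `w`, bias `b`, and the `fgsm_perturb` helper are all hypothetical) so the input gradient has a closed form.

```python
import numpy as np

def predict(w, b, x):
    """Sigmoid probability of the positive class for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """One-step fast-gradient-sign perturbation of input x.

    For the logistic loss, d(loss)/dx = (p - y) * w, so the attack
    adds eps * sign((p - y) * w), pushing the prediction away from
    the true label y while changing each coordinate by at most eps.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model that classifies x correctly before the attack.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # true label y = 1; p(x) > 0.5 here
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=1.0)
print(predict(w, b, x) > 0.5)      # clean input: predicted correctly
print(predict(w, b, x_adv) > 0.5)  # perturbed input: prediction flipped
```

The same principle applies to deep networks, where the input gradient is obtained by backpropagation instead of a closed form; detection and rejection schemes like those the abstract surveys try to flag inputs such as `x_adv` before they reach the classifier.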
