July 28, 2022, 1:20 a.m. | Deyin Liu, Lin Wu, Farid Boussaid, Mohammed Bennamoun

cs.CR updates on arXiv.org

Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples crafted with imperceptible perturbations: a small change to an
input image can induce a misclassification and thus threaten the
reliability of deployed deep learning systems. Adversarial training (AT)
is often adopted to improve the robustness of DNNs by training on a
mixture of corrupted and clean data. However, most AT-based methods are
ineffective against "transferred adversarial examples," which are
generated to …
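To make the AT recipe in the abstract concrete, below is a minimal sketch in PyTorch. The abstract does not specify the authors' attack or training procedure, so this uses FGSM as one standard, assumed instantiation of an "imperceptible perturbation," and a simple clean-plus-adversarial mixed batch; it is an illustration of generic AT, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One signed-gradient step of size eps (FGSM), a common 'imperceptible' perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def at_step(model, optimizer, x, y, eps=8 / 255):
    """One adversarial-training step on a mixture of corrupted and clean data."""
    model.eval()  # craft the attack against the current weights
    x_adv = fgsm_perturb(model, x, y, eps)
    model.train()
    optimizer.zero_grad()
    # Train on the clean batch and its adversarial counterpart together.
    loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for images in [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
print(at_step(model, optimizer, x, y))
```

Crafting the perturbation against the current weights and then optimizing on the mixed batch is the standard AT loop; the limitation the abstract raises is that examples transferred from a different source model need not resemble the perturbations seen in this loop.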

