May 6, 2022, 1:20 a.m. | Youhuan Yang, Lei Sun, Leyu Dai, Song Guo, Xiuqing Mao, Xiaoqin Wang, Bayi Xu

cs.CR updates on arXiv.org arxiv.org

Deep Neural Networks (DNNs) are widely used in many fields because of their
powerful performance, but recent studies have shown that deep learning models
are vulnerable to adversarial attacks: by adding a slight perturbation to the
input, an attacker can make the model produce wrong results. This is
especially dangerous for systems with high security requirements, so this
paper proposes a new defense method that uses the model's super-fitting
status. The model's adversarial robustness (i.e., its accuracy under
adversarial attack) is greatly improved …
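To make the "slight perturbation" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way such adversarial inputs are crafted. This is purely illustrative and not the attack or defense from the paper; the toy logistic model, its weights, and the epsilon value are all assumptions.

```python
import math

# Illustrative sketch (not from the paper): FGSM nudges each input
# feature by +/- epsilon in the direction that increases the loss.
# The logistic model, weights, and epsilon below are assumed values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic classifier."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, y, w, b, epsilon):
    # For logistic regression with cross-entropy loss, the gradient
    # of the loss with respect to input x is (p - y) * w.
    p = predict(w, b, x)
    return [xi + epsilon * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0            # input correctly classified as class 1
x_adv = fgsm_perturb(x, y, w, b, epsilon=0.6)
print(predict(w, b, x) > 0.5)     # True: clean input classified correctly
print(predict(w, b, x_adv) > 0.5) # False: the perturbation flips the label
```

Even though each feature moves by at most epsilon, the prediction flips, which is exactly the fragility the abstract describes.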

