March 23, 2023, 1:10 a.m. | Hao Wang, Chen Li, Jinzhe Jiang, Xin Zhang, Yaqian Zhao, Weifeng Gong

cs.CR updates on arXiv.org arxiv.org

Recently, the robustness of deep learning models has received widespread
attention, and various methods for improving model robustness have been
proposed, including adversarial training, model architecture modification,
design of loss functions, certified defenses, and so on. However, the principles
underlying robustness to attacks are still not fully understood, and the related
research remains insufficient. Here, we have identified a significant factor
that affects the robustness of models: the distribution characteristics of
softmax values for non-real label samples. …
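To make the quantity under discussion concrete, the following sketch computes the softmax output of a classifier and isolates the probability mass assigned to the non-true-label classes. The logits and the true-label index are hypothetical illustrations, not values from the paper; the entropy over the renormalized non-true-label probabilities is just one plausible way to summarize the "distribution characteristics" the abstract refers to.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 5-class classifier; suppose class 2 is the true label.
logits = [1.0, 0.5, 4.0, 0.2, 1.5]
true_label = 2

probs = softmax(logits)
non_true = [p for i, p in enumerate(probs) if i != true_label]

# Renormalize the non-true-label probabilities and measure their entropy.
# A flatter (higher-entropy) distribution over the wrong classes is the kind
# of characteristic the abstract identifies as relevant to robustness.
mass = sum(non_true)
q = [p / mass for p in non_true]
entropy = -sum(p * math.log(p) for p in q)

print(len(non_true))        # number of non-true-label classes
print(round(sum(probs), 6)) # softmax probabilities sum to 1
print(entropy > 0.0)        # entropy of the non-true-label distribution
```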